What is your favourite Haskell "aha" moment?

Friends,

In a few weeks I'm giving a talk to a bunch of genomics folk at the Sanger Institute (https://www.sanger.ac.uk/) about Haskell. They do lots of programming, but they aren't computer scientists. I can tell them plenty about Haskell, but I'm ill-equipped to answer the main question in their minds: *why should I even care about Haskell?* I'm too much of a biased witness.

So I thought I'd ask you for help. War stories perhaps - how using Haskell worked (or didn't) for you. But rather than talk generalities, I'd love to illustrate with copious examples of beautiful code.

* Can you identify a few lines of Haskell that best characterise what you think makes Haskell distinctively worth caring about? Something that gave you an "aha" moment, or that feeling of joy when you truly make sense of something for the first time.

The challenge is, of course, that this audience will know no Haskell, so muttering about Cartesian Closed Categories isn't going to do it for them. I need examples that I can present in 5 minutes, without needing a long setup.

To take a very basic example, consider Quicksort using list comprehensions, compared with its equivalent in C. It's so short, so obviously right, whereas doing the right thing with in-place update in C is notoriously prone to fencepost errors etc. But it also makes much less good use of memory, and is likely to run slower. I think I can do that in 5 minutes.

Another thing that I think comes over easily is the ability to abstract: generalising sum and product to fold by abstracting out a functional argument; generalising at the type level by polymorphism, including polymorphism over higher-kinded type constructors. Maybe 8 minutes.

But you will have more and better ideas, and (crucially) ideas that are more credibly grounded in the day-to-day reality of writing programs that get work done. Pointers to your favourite blog posts would be another avenue. (I love the Haskell Weekly News.)
Finally, I know that some of you use Haskell specifically for genomics work, and maybe some of your insights would be particularly relevant for the Sanger audience. Thank you! Perhaps your responses on this thread (if any) may be helpful to more than just me. Simon
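The quicksort Simon mentions is usually presented along these lines (a sketch of the well-known two-liner, not taken from his slides):

```haskell
-- The classic list-comprehension quicksort: short and obviously right,
-- though not in-place and not as fast as a tuned C implementation.
quicksort :: Ord a => [a] -> [a]
quicksort []     = []
quicksort (p:xs) =
  quicksort [x | x <- xs, x < p]    -- everything smaller than the pivot
  ++ [p] ++
  quicksort [x | x <- xs, x >= p]   -- everything at least the pivot
```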

If you want a sorting algorithm, go for bottom-up merge sort. That's a *real* merge sort (unlike the non-randomized "QuickSort" you mention), and dead simple.

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.
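For reference, a minimal sketch of the bottom-up merge sort being suggested (my own formulation, not the poster's) could look like this: wrap each element in a singleton list, then repeatedly merge adjacent pairs until one list remains.

```haskell
-- Bottom-up merge sort: no pivot choice, worst-case O(n log n).
mergeSort :: Ord a => [a] -> [a]
mergeSort = mergeAll . map (: [])
  where
    mergeAll []   = []
    mergeAll [xs] = xs
    mergeAll xss  = mergeAll (mergePairs xss)

    -- Merge adjacent runs, halving the number of runs each pass.
    mergePairs (xs : ys : rest) = merge xs ys : mergePairs rest
    mergePairs xss              = xss

    -- Standard merge of two sorted lists.
    merge xs []     = xs
    merge [] ys     = ys
    merge (x:xs) (y:ys)
      | x <= y    = x : merge xs (y:ys)
      | otherwise = y : merge (x:xs) ys
```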

On Wed, Jul 11, 2018 at 12:10:21PM +0000, Simon Peyton Jones via Haskell-Cafe wrote:

Can you identify a few lines of Haskell that best characterise what you think makes Haskell distinctively worth caring about?
If most of your audience uses a dynamically typed language, I would introduce type inference and how small, painful bugs can be tracked down by the compiler without having to write a single type signature bar the top level. Another good one is implementing a tricky function with holes (in what I have seen described as "hole-driven" development). Unfortunately, one of Haskell's strongest suits (ease of refactoring) doesn't really shine in bite-sized examples!
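Typed holes make hole-driven development concrete: leave an underscore where you are stuck and GHC reports roughly what type the missing piece must have, together with the bindings in scope. A small illustration (my example, not the poster's):

```haskell
-- Step 1: write the type you want, leave a hole for the body:
--
--   combine :: (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
--   combine f ma mb = _
--
-- GHC responds with something like:
--   Found hole: _ :: Maybe c
--   Relevant bindings include f :: a -> b -> c, ma :: Maybe a, ...
--
-- Step 2: refine, guided by the reported types, until no holes remain:
combine :: (a -> b -> c) -> Maybe a -> Maybe b -> Maybe c
combine f (Just a) (Just b) = Just (f a b)
combine _ _        _        = Nothing
```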

Hello, Francesco! For me: phantom types. And maybe type families. Also good is the "map" function, which replaces the visitor pattern in a short way, but it exists in most modern languages.
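A five-minute phantom-type example (a hypothetical units-of-measure sketch of my own, not from genomics): the `unit` parameter exists only at the type level, so mixing incompatible units is a compile-time error at zero runtime cost.

```haskell
-- A phantom type: 'unit' appears on the left but not in any field.
newtype Quantity unit = Quantity Double deriving (Show, Eq)

-- Empty declarations: these types have no values, they are only tags.
data Meters
data Seconds

-- Only same-unit quantities can be added; the compiler checks it.
add :: Quantity u -> Quantity u -> Quantity u
add (Quantity a) (Quantity b) = Quantity (a + b)

-- add (Quantity 1 :: Quantity Meters) (Quantity 2 :: Quantity Seconds)
-- is rejected by the type checker before the program ever runs.
```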

My goodness... Neither Simon, nor the five responders, ever mention *laziness*! For me it was THE "aha" moment, or rather a long period...

The problem with popularizing laziness is that too many short comments (on the Internet) about it are not serious. People speak mainly about infinite lists (as if somebody really cared about this "infinity"), or claim that lazy programs do not evaluate some expressions, which should *economise* some time, which usually is not true...

For me, lazy programs permit representing dynamic processes as data. Iterations as mathematical structures. Co-recursive perturbational schemes (or asymptotic expansions, etc.), which are 10 or more times shorter than the orthodox approaches, and remain readable and natural. Laziness makes it possible to play with continuations, thus "making the future explicit", in a particularly constructive manner.

===========================

Second section... Somebody mentioned "type families". Why not, but for an audience outside of the FP realm?? If something about types, then for sure the automatic polymorphic inference, which remains a bit mysterious for many people, including my (comp. sci.) students. And the *Curry-Howard correspondence*.

All the best.

Jerzy Karczmarczuk
Caen, France
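"Dynamic processes as data" can be shown in one line. The classic example (mine, chosen in the same spirit as the point above) defines the entire Fibonacci sequence as a self-referential list; laziness means only the demanded prefix is ever computed.

```haskell
-- An "infinite" definition is fine: the list is a frozen process,
-- and demand drives how much of it actually runs.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

firstTen :: [Integer]
firstTen = take 10 fibs   -- [0,1,1,2,3,5,8,13,21,34]
```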

There was a Functional Programming Meetup in Cape Town recently, by people doing genomics [1]. Things they emphasized were DSLs, and using parser combinators and pretty-printers to do so. A lot of the work relates to reading data in from standard genomic databases, and being able to represent what comes out.

Alan

[1] https://www.meetup.com/Cape-Town-Functional-Programmers/events/242900483/
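As a flavour of what "reading data from standard genomic formats" can look like, here is a minimal FASTA reader in plain Haskell. This is my own sketch with hypothetical record and field names, not the meetup's code; a real project would likely use a parser-combinator library (attoparsec, megaparsec) for error reporting and speed.

```haskell
-- A FASTA record: a '>' header line followed by sequence lines.
data FastaRecord = FastaRecord
  { recordHeader   :: String
  , recordSequence :: String
  } deriving (Show, Eq)

-- Split a FASTA file into records: a '>' line starts a record,
-- and the lines up to the next '>' form its sequence.
parseFasta :: String -> [FastaRecord]
parseFasta = go . lines
  where
    go [] = []
    go (('>' : header) : rest) =
      let (seqLines, more) = break isHeader rest
      in FastaRecord header (concat seqLines) : go more
    go (_ : rest) = go rest   -- skip any junk before the first header

    isHeader ('>' : _) = True
    isHeader _         = False
```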

Not sure if this helps, but the following two code snippets (taken from Rosetta Code) for computing the generalized Cartesian product of a list of lists might exemplify a very important aspect of Haskell: Haskell programmers must invest a lot of time into learning functional programming, but once they speak this common language, the code tends to be easier to understand, maintain, and reason about.

The Java solution requires one to think about all sorts of lower-level details of how to iterate the lists and construct the result. It even mixes iteration with more "functional" idioms:

```java
import static java.util.Arrays.asList;
import static java.util.Collections.emptyList;
import static java.util.Optional.of;
import static java.util.stream.Collectors.toList;

import java.util.List;

public class CartesianProduct {

    public List<?> product(List<?>... a) {
        if (a.length >= 2) {
            List<?> product = a[0];
            for (int i = 1; i < a.length; i++) {
                product = product(product, a[i]);
            }
            return product;
        }
        return emptyList();
    }

    private <A, B> List<?> product(List<A> a, List<B> b) {
        return of(a.stream()
                .map(e1 -> of(b.stream().map(e2 -> asList(e1, e2)).collect(toList())).orElse(emptyList()))
                .flatMap(List::stream)
                .collect(toList())).orElse(emptyList());
    }
}
```

A programmer who has spent a lot of time studying Monads and playing around with them, and who understands the Monad instance for lists, might come up with the following solution in Haskell:

```haskell
cartProdN :: [[a]] -> [[a]]
cartProdN = sequence
```

This also made me realize two things:

0. Haskell will never be mainstream, because there are not a lot of programmers out there who are willing to make the investment required to learn the concepts necessary to understand and write code like the one shown above.

1. Haskell has rendered me unemployable for almost all jobs that do not involve Haskell codebases. Imagine having to maintain code like the first snippet, replicated in a 10K LOC codebase.
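For the curious, the one-liner really does compute the product. A quick check in GHCi (my example values): `sequence`, specialised to the list monad, enumerates one choice from each inner list.

```haskell
-- cartProdN is just 'sequence' at type [[a]] -> [[a]]:
cartProdN :: [[a]] -> [[a]]
cartProdN = sequence

-- One choice from each of [1,2], [3,4] and [5], in order:
example :: [[Int]]
example = cartProdN [[1, 2], [3, 4], [5]]
-- [[1,3,5],[1,4,5],[2,3,5],[2,4,5]]
```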
And yes, I did use the code above in production :)


This also made me realize of two things: 0. Haskell will never be mainstream, because there are not a lot of programmers out there who are willing to do the investment required for learning the necessary concepts to understand and write code like the one shown above.
Replace "Haskell" with "Java" in the previous sentence, and you would have an equally truthful statement. :) I spent years getting comfortable with OO languages, and then I spent years getting familiar with Haskell. For someone who only knows Haskell (and I know such a person), I couldn't imagine teaching them Java well enough to write that code!
I speak only from my own narrow perspective. I'd say programming is hard, but functional programming is harder. Maybe that's why Java replaced Haskell in some universities' curricula: https://chrisdone.com/posts/dijkstra-haskell-java. For some reason most programmers I know are not scared of learning OO, but they fear functional programming. I think the reason might be that OO concepts like inheritance and passing messages between objects are a bit more concrete and easier to grasp (when you present toy examples, at least). Then you have design patterns, which have intuitive names and give some very general guidelines that one can try after reading them (and add his/her own personal twist). I doubt people can read the Monad laws and make any sense out of them on the first try. Maybe FP and OO are perceived as equally hard, but that has not been my impression so far.
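For what it's worth, the Monad laws themselves are short; what is hard is seeing why they matter. Here they are written as executable checks for the list monad, at a few example values of my own choosing (stated with `return`, as in the classic presentation):

```haskell
-- Left identity:  injecting a value and then binding is just application.
leftIdentity :: Bool
leftIdentity = (return 3 >>= k) == k 3
  where k x = [x, x + 1]

-- Right identity: binding with 'return' changes nothing.
rightIdentity :: Bool
rightIdentity = (m >>= return) == m
  where m = [1, 2, 3 :: Int]

-- Associativity: the grouping of binds does not matter.
associativity :: Bool
associativity = ((m >>= k) >>= h) == (m >>= (\x -> k x >>= h))
  where m   = [1, 2 :: Int]
        k x = [x, x * 10]
        h x = [x + 1]
```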
Semicolons... Brackets *and* whitespace delineation (which is required, and which is customary?) ... import "static" ... "public", "class", "private" ... eager evaluation ... pass-by-reference/whatever ... procedural statements ... these things are all mind-boggling if you don't learn them early.

In short, I don't think the investment required in Haskell is different than in any other programming language. As with natural languages, there are no absolute difficulties, only relative ones.

Well, I guess that's subjective (as our two opinions illustrate ;)). It'd be nice to have some empirical evidence of this, but I couldn't find any paper on the subject ...

(This might actually be a useful point to bring up when speaking to non-Haskellers, so perhaps this message isn't as off-topic as I initially assumed.)

but functional programming is harder.
There's no doubt that languages like Haskell go through extra efforts to make it *harder* to write incorrect code, and along the way they also make it harder to write code at all. So maybe it will take your guy a week to get the code written in Haskell, whereas a couple days were sufficient in Java. But that's without counting the subsequent month during which the Java code will have to be debugged before it actually works ;-) IIRC it was Bob Harper who said a good programming language should be hard to write (but easy to read, of course)? Stefan
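One concrete way the language makes incorrect code harder to write: absence is a type, not a null. A minimal sketch of my own (not Stefan's example) using `find` from `Data.List`:

```haskell
import Data.List (find)

-- 'find' cannot return null: its type is Maybe, so the caller is made
-- to say what happens when nothing matches. Dropping the Nothing case
-- triggers an incomplete-pattern warning from the compiler rather than
-- a surprise NullPointerException at runtime.
firstEven :: [Int] -> String
firstEven xs =
  case find even xs of
    Just n  -> "first even: " ++ show n
    Nothing -> "no even numbers"
```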

I think it's important to keep in mind that the correct trade-off for one situation is not the correct trade-off for another. This is a great example! On one side of the spectrum, you've got situations where Idris or Agda are good fits, where it's critically important to know that code is correct, even at the expense of significant complexity or time. But there is another side of that spectrum. It's probably best characterized by education: there are no users, bugs that aren't found don't matter at all, and even when you know for sure that something is wrong, it would be great if you can let it fail and *watch* it go wrong; stopping someone who's learning to tell them they got something wrong before they understand WHY is bad teaching.

I've been working for years now on using (a variant of) Haskell in early education, as young as 10 to 11 years old. Looking at this experience, I agree whole-heartedly that regardless of what's best in a professional setting, there's still something about Haskell that's far more difficult to learn than Python and Java. It's not about the incidental complexity of the language, which can usually be avoided. It's about the way everything just looks so easy once it's written, but new programmers struggle mightily to figure out how to get started in writing it. People don't understand how to build things compositionally.

As an aside: I know it's popular among the functional programming world to hypothesize that this is because people have used imperative languages first. I can tell you, though, that the hypothesis is wrong. I spend a good bit of my time teaching students with no previous programming experience in any language. They also struggle with it, but they understand imperative programming intuitively. Mathematics teachers also know this, and it's why they so often fall back to teaching step-by-step processes instead of talking about subexpressions having meaning. Think about how you learned the "order of operations", which obviously should be understood as a question of parsing and identifying subexpressions, but is always taught as "you multiply before you add" because that's what gets correct answers on exams.

Incidentally, realizing this makes me more determined to teach Haskell and compositional thinking at a younger age. It might not be easy, but you don't get far in mathematics without grasping the idea of building up abstract objects through composition. This shift from thinking about "how to get it done" to thinking about "what it means" is a huge step toward understanding the world around us, and should be pretty far up on the priority list. So I'm not badmouthing Haskell here. I'm just saying we should realize that there's a very real sense in which it is legitimately HARDER to understand. No use being in denial about that.
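The "subexpressions have meaning" point can be made with a tiny example of my own: each named piece below is a value in its own right, and the whole program is read by composing the parts rather than by following steps.

```haskell
-- Compositional style: name the parts, then combine them.
circleArea :: Double -> Double
circleArea r = pi * r * r

-- Reads directly as "the sum of the areas", with no loop to trace.
totalArea :: [Double] -> Double
totalArea = sum . map circleArea
```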

On 11.07.2018 at 16:36, Damian Nadales wrote:

I speak only from my own narrow perspective. I'd say programming is hard, but functional programming is harder.

Actually it's pretty much the opposite, I hear from teachers.

Maybe that's why Java replaced Haskell in some universities' curricula https://chrisdone.com/posts/dijkstra-haskell-java.

The considerations are marketable skills. A considerable fraction of students looks at the curriculum and at job offers, and if they find that the lists don't match, they will go to another university. Also, industry keeps lobbying for teaching skills that they can use. Industry can give money to universities, so this gives them influence on the curriculum (and only if they get time to talk the topic over with the dean). This aspect can vary considerably between countries, depending on how much money the universities tend to acquire from industry.

For some reason most programmers I know are not scared of learning OO, but they fear functional programming.

Programmers were *very* scared of OO in the nineties. It took roughly a decade or two (depending on where you put the starting point) to get comfortable with OO.

I think the reason might be that OO concepts like inheritance and passing messages between objects are a bit more concrete and easier to grasp (when you present toy examples at least).

OO is about how to deal with having to pack everything into its own class (and how to arrange stuff into classes). Functional is about how to deal with the inability to update. Here, the functional camp actually has the easier job, because you can just tell people to write code that creates new data objects and get over with it. Performance concerns can be handwaved away by saying that the compiler is hyper-aggressive, and "you can look at the intermediate code if you suspect the compiler is the issue". (Functional is a bit similar to SQL here, but the SQL optimizers are much less competent than GHC at detecting optimization opportunities.)

Then you have design patterns, which have intuitive names and give some very general guidelines that one can try after reading them (and add his/her own personal twist). I doubt people can read the Monad laws and make any sense out of them on the first try.

That's true, but many of the misconceptions around monads from the early days have been cleared up. But yes, the monad laws are too hard to read. OTOH you won't be able to read the Tree code in the JDK without the explanations either.
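The "just write code that creates new data objects" point above takes two lines to demonstrate (a minimal sketch of my own): record update syntax builds a new value, and the old one is untouched and still usable.

```haskell
data Point = Point { px :: Double, py :: Double } deriving (Show, Eq)

-- "Updating" a field actually builds a fresh Point;
-- the original is never modified.
moveRight :: Double -> Point -> Point
moveRight d p = p { px = px p + d }
```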

I used to teach undergrad OOP nonsense. I have been teaching FP for 15 years. [^1] The latter is *way* easier. Existing programmers are more difficult than children, but still way easier to teach FP than all the other stuff.

[^1]: Canberra anyone? https://qfpl.io/posts/2018-canberra-intro-to-fp/

Tony, I am curious about your attitude towards multi-paradigm and ML-like languages. I agree that functional programming is easily the better of the bunch in many forms of application logic and elegance (which is why I have come to love Scheme and Haskell), but do you see any room for languages like F# or Rust, which have large capacities for FP but are either functional-first (but not pure) or a hybrid?

Brett Gilio

I wouldn't say Rust has a large capacity for FP. I am not familiar with F#. The thing that makes FP infeasible in Rust is not the lack of purity but rather the fact that affine types make it difficult to treat functions as first-class values. On 07/12/2018 01:40 AM, Brett Gilio wrote:
Tony,
I am curious on your attitude towards multi-paradigm and ML-like languages. I agree that functional programming is easily the better of the bundle in many forms of application logic and elegance (which is why I have come to love Scheme and Haskell), but do you see any room for those languages like F# or Rust which have large capacities for FP but are either functional-first (but not pure) or a hybrid?
Brett Gilio
On 07/12/2018 01:35 AM, Tony Morris wrote:
I used to teach undergrad OOP nonsense. I have been teaching FP for 15 years. [^1]
The latter is *way* easier. Existing programmers are more difficult than children, but still way easier to teach FP than all the other stuff.
[^1]: Canberra anyone? https://qfpl.io/posts/2018-canberra-intro-to-fp/
On 07/12/2018 04:23 PM, Joachim Durchholz wrote:
On 11.07.2018 at 16:36, Damian Nadales wrote:
I speak only from my own narrow perspective. I'd say programming is hard, but functional programming is harder.
Actually it's pretty much the opposite, I hear from teachers.
Maybe that's why Java replaced Haskell in some university curricula.
The considerations are marketable skills. A considerable fraction of students looks at the curriculum and at job offers, and if they find that the lists don't match, they will go to another university. Also, industry keeps lobbying for teaching skills that it can use. Industry can give money to universities, so this gives it influence on the curriculum (even if only by getting time to talk the topic over with the dean). This aspect can vary considerably between countries, depending on how much money the universities tend to acquire from industry.
https://chrisdone.com/posts/dijkstra-haskell-java. For some reason most programmers I know are not scared of learning OO, but they fear functional programming.

I am afraid this can lead to a flame war again, but F# has super-capacity: measuring units, type providers, computation expressions, active patterns, static/dynamic type constraints, constraints on existing methods, etc. It's clean, it borrows some ideas from Haskell, and some of its original ideas Haskell borrows back (but with a worse implementation). IMHO, for teaching FP to children, F# is the best. Even C# now has a lot of FP features (https://github.com/dotnet/csharplang/blob/master/proposals/patterns.md#arith... -- isn't it super easy and beautiful?). Rust is more low-level: you have to think about memory "management", and its OOP has some problems...
And a serious argument for teaching children: salary trends (joke, sure) :-) But compare salaries in F# and in Haskell, for example - people often choose a language after checking current salaries in the market. F# is also more focused on realistic tasks and business value. It lacks performance, and UWP support is not there yet (but in progress)... To feel how sexy F# is, compare a Web application written in Websharper with one in any Haskell framework. Haskell is beautiful, but I'm afraid its fate will unfortunately be the same as that of Common Lisp, NetBSD, etc. - a ground for ideas and experiments, with disputable design. Also, it is much more difficult to teach children Haskell than F#... IMHO, in general, teaching FP is easier than teaching OOP - if the FP language is not Haskell (i.e. some language that targets eager/efficient/dynamic/real goals instead of abstract type play).
12.07.2018 13:28, Vanessa McHale wrote:
I wouldn't say Rust has a large capacity for FP. I am not familiar with F#. The thing that makes FP infeasible in Rust is not the lack of purity but rather the fact that affine types make it difficult to treat functions as first-class values.

On 07/12/2018 06:46 AM, PY wrote: ...Web application written in Websharper and in any Haskell framework. Haskell is beauty
but I'm afraid its fate unfortunately will be the same as one of Common Lisp, NetBSD, etc - it's ground for ideas and experiments and has disputable design. Also it's more-more difficult to teach children to Haskell than to F#...
I wonder if this is simply a result of the marketing of the language itself, rather than the strength of the language. I agree, F# has a lot of beauty, but there remain many things on which Haskell has a leg up and which F# lacks, like dependent types.
--
Brett Gilio
brettg@posteo.net | bmg@member.fsf.org
Free Software -- Free Society!

13.07.2018 02:52, Brett Gilio wrote:
On 07/12/2018 06:46 AM, PY wrote: ...Web application written in Websharper and in any Haskell framework. Haskell is beauty but I'm afraid its fate unfortunately will be the same as one of Common Lisp, NetBSD, etc - it's ground for ideas and experiments and has disputable design. Also it's more-more difficult to teach children to Haskell than to F#...
https://jackfoxy.github.io/DependentTypes/
https://github.com/caindy/DependentTypesProvider
Discussion: https://news.ycombinator.com/item?id=15852517
Also F# has F* ;)
I wonder if this is simply a result of the marketing of the language itself, rather than the strength of the language. I agree, F# has a lot of beauty, but there remain many things on which Haskell has a leg up and which F# lacks, like dependent types.
IMHO there are several reasons:
1. Haskell limits itself to lambda only. For example, instead of adding other abstractions and becoming a modern MULTI-paradigm language, it keeps everything a lambda: record accessors that lead to name collisions are solved by adding one or two extensions to the language instead of a standard syntax (dot, sharp, or something similar). So point #1 is a limitation in abstraction: monads, transformers, anything - everything is a function. That's not good. There were such languages already: Forth, Joy/Cat, APL/J/K... Most of them look dead. When you try to be elegant, your product (language) dies. This is not my opinion, this is only my observation. People like diversity and variety: in food, in programming languages, in relations, anywhere :)
2. When a language has a killer app and a killer framework, IMHO it has more chances. But if it has _killer ideas_ only... those ideas will be re-implemented in other languages and frameworks, but with simpler and more typical syntax :) It's difficult to compete with a product, a framework, a big library, but it's easy to compete with ideas. That's an observation too :-) You can find it in politics, for example. Or in industry. Repeating a big solution is more difficult. The language itself is not an argument for me (I am a usual developer); the arguments are killer apps/frameworks/libraries/ecosystem/etc. Currently Haskell has stack only - it's very good, but most languages have similar tools (not all have an LTS analogue, but the big frameworks are the same).
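The record-accessor collision mentioned in point 1 is easy to demonstrate. A minimal sketch (type and field names are invented for the example): without the DuplicateRecordFields extension, this module would not compile, because each field declaration generates a top-level accessor function, so `name` would be defined twice.

```haskell
{-# LANGUAGE DuplicateRecordFields #-}
-- Two records sharing a field name in one module: legal only with
-- the extension enabled. Pattern matching disambiguates via the
-- constructor, so no dot- or sharp-style syntax is required.

data Person  = Person  { name :: String }
data Company = Company { name :: String }

main :: IO ()
main = do
  let Person  { name = p } = Person  "Ada"
      Company { name = c } = Company "Initech"
  putStrLn (p ++ " works at " ++ c)
```

This is the "1,2 extensions" workaround the paragraph complains about, shown concretely.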

The idea that Haskell is in the same category as Forth or APL is completely wrong, and the idea that Haskell only has stack for tooling is just plain wrong. Haskell already has libraries that are superior to anything else available for certain use cases. The idea that abstraction occurs only over functions is false as well: as of GHC 8.2.2, one can abstract over modules. Adding special syntax for record access would be inadvisable when principled approaches such as row polymorphism exist.
On 07/13/2018 02:38 AM, PY wrote:

On 13.07.2018 at 09:38, PY wrote:
1. Haskell limits itself to lambda-only. Example, instead to add other abstractions and to become modern MULTI-paradigm languages,
"modern"? That's not an interesting property. "maintainable", "expressive" - THESE are interesting. Multi-paradigm can help, but if overdone can hinder it - the earliest multi-paradigm language I'm aware of was PL/I, and that was a royal mess I hear.
So, point #1 is limitation in abstraction: monads, transformers, anything - is function. It's not good.
Actually limiting yourself to a single abstraction tool can be good. This simplifies semantics and makes it easier to build stuff on top of it. Not that I'm saying that this is necessarily the best thing.
There were such languages already: Forth, Joy/Cat, APL/J/K... Most of them look dead.
Which proves nothing, because many multi-paradigm languages look dead, too.
When you try to be elegant, your product (language) died.
Proven by Lisp... er, disproven.
This is not my opinion, this is only my observation. People like diversity and variety: in food, in programming languages, in relations, anywhere :)
Not in programming languages. Actually multi-paradigm is usually a bad idea. It needs to be done in an excellent fashion to create something even remotely usable, while a single-paradigm language is much easier to do well. And in practice, bad language design has much nastier consequences than leaving out some desirable feature.
2. When language has killer app and killer framework, IMHO it has more chances. But if it has _killer ideas_ only... So, those ideas will be re-implemented in other languages and frameworks but with more simple and typical syntax :)
"Typical" is in the eye of the beholder, so that's another non-argument.
It's difficult to compete with product, framework, big library, but it's easy to compete with ideas. It's an observation too :-)
Sure, but Haskell has product, framework, big library. What's missing is commitment by a big company, that's all. Imagine Google adopting Haskell, committing to building libraries and looking for Haskell programmers in the streets - all of a sudden, Haskell is going to be the talk of the day. (Replace "Google" with whatever big-name company with deep pockets: Facebook, MS, IBM, you name it.)
language itself is not argument for me.
You are arguing an awful lot about missing language features ("multi-paradigm") to credibly make that statement.
Argument for me (I am usual developer) are killer apps/frameworks/libraries/ecosystem/etc. Currently Haskell has stack only - it's very good, but most languages has similar tools (not all have LTS analogue, but big frameworks are the same).
Yeah, a good library ecosystem is very important, and from the reports I see on this list it's not really good enough. The other issue is that Haskell's extensions make it more difficult to have library code interoperate. Though that's a trade-off: The freedom to add language features vs. full interoperability. Java opted for the other opposite: 100% code interoperability at the cost of a really annoying language evolution process, and that gave it a huge library ecosystem. But... I'm not going to make the Haskell developers' decisions. If they don't feel comfortable with reversing the whole culture and make interoperability trump everything else, then I'm not going to blame them. I'm not even going to predict anything about Haskell's future, because my glass orb is out for repairs and I cannot currently predict the future. Regards, Jo

I understand that my points are disputable; sure, for example, the multi-paradigm Oz is dead 😊 Any rule has exceptions. But my point was that people don't like elegant, one-abstraction languages. It's my observation. For me, Smalltalk was a good language (mostly dead, except Pharo, which looks cool). Forth - a high-level "stack-around-assembler", mostly dead (Factor looks abandoned; only 8th looks super cool, but it's not free). What else? Lisp? OK, there are SBCL, Clojure, Racket... but you don't usually even find Clojure in the language trends. APL, J - super cool! Seems dead (I don't know what happened with K). ML, SML? By the way, Haskell's role was to kill the SML community; sure, it is sad to acknowledge it, but it's 100% true... Haskell tries to be minimalistic, and IMHO this can lead to death.
Joachim, I'm not saying "it's good/it's bad" or "multi-paradigm is good" or anything else... I don't know what is right. These are my observations only. It looks like it can happen. If we look at Haskell's history, we see a strange curve. I'll try to describe it with humour, so please don't take it seriously 😊
- Let's be pure lambda fanatics!
- Is it possible to create a big application? Is it possible to compile and optimize it?!
- Let's try...
- Wow, it's possible!!! (Sure it's possible; Lisp did it long, long ago.)
- Looks like a puzzle; can be used to write a lot of articles (there were articles about combinators, Joy/Cat/Scheme, etc.; now there are a lot of Haskell articles - big interest in academia. But IMHO academic interest in a language can kill it too: Clean, Strongtalk, etc.)
- Stop! How do we do I/O? Real programming?!!
- Ohh, if we wrap it in a lambda and defer it to the top level (main :: IO ()), it will have an I/O type (the wrapper is hidden in the type).
- Let's call it... Monad!!
- Wow, cool! It works! Everybody should use monads! Does your language not have monads? Then we fly to you! (Everybody forgot that monads are a workaround for a Haskell limitation and are not needed in other languages. Also they lead to low-performance code.)
- But how to compose them???!?!
- We will wrap/unwrap, wrap/unwrap... Let's call it... transformers!!! "Monad transformers" - sounds super cool. Your language does not have a "lift" operation, right? Ugh...
- How to access record fields... How... That's a question. '.' - no! '#' - no! Eureka! We will add several language extensions and voila!
- To be continued... 😊
I love Haskell, but I think such a curve is absolutely impossible in a commercial language, with IT managers 😊 - solving a problem in a way that creates another problem which needs a new solution again, where the only reason is to keep lambda-abstraction-only (OK, Vanessa, Backpack too 😉). Can you imagine all cars having red color? Or all food being sweet? It's not a technical question, but a psychological and linguistic one. Why are natural languages not so limited? They even borrow words and forms from one another 😊
Haskell's core team knows better than me, and I respect a lot of Haskell users; most of them have helped me A LOT (!!!). It's not even an opinion, because I don't know what the right way is. Let's call it an observation and a feeling about the future. I feel Haskell has 3 cases: 1) to die, 2) to change itself, 3) to fork into another language. How I see a commercially successful Haskell-like language:
- No monads, no transformers
- There are dependent types, linear types
- There are other evaluation models/abstractions (not only lambda)
- Special syntax for record fields, etc.
- Less operator noise and fewer language extensions (but it's very disputable)
- Solve the problems with numerous from/to conversions (strings, etc.)
- Solve the problems with libraries
The last point needs explanation:
- There are a lot of libraries written only to check some type concepts, with no business value. Also there are a lot of libraries written by students while they are learning Haskell: mostly without any business value, and abandoned.
- There are situations when you get alternative libraries in one project due to dependencies (but it should be only one, not both!)
- Strange dependencies: I even have Agda installed! Why???!
IMHO the problems with libraries and lambda-only abstraction lead to super slow compilation and a big, complex compiler. So, currently I see (again, it's my observation only) 2 big "camps":
1. Academia, which has its own interests, for example, to keep Haskell minimalistic (one-abstraction-only). The only trade-off was to add language extensions, but they fragment the language.
2. Practical programmers, whose interests are different from the 1st "camp".
Another of my observations: a lot of people have tried Haskell and switched to other languages (C#, F#, etc.) because they could not use it for big enterprise projects (Haskell becomes a hobby for small experiments, or is dropped).
Joachim, I absolutely agree that a big company could solve a lot of these problems. But some of them already have their own languages (compare measure units in Haskell and in F#, and see what looks better...). When I said "killer app", I meant: devs like Ruby not for its syntax but for RoR. The same with Python: sure, Python's syntax is very good, but without Zope, Django, TurboGears, SQLAlchemy, Twisted, Tornado, Cheetah, Jinja, etc., nobody would use Python. Sure, there are exceptions: Delphi, CBuilder, for example. But that is the bad karma of Borland 😊 They had a lot of compilers (Pascal, Prolog, C/C++, etc.), but... On the other hand, after reincarnation we have C# 😊
Actually all of these are only observations: nobody knows the future.
/Best regards, Paul
From: Joachim Durchholz
Sent: 13 July 2018 21:49
To: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Investing in languages (Was: What is your favourite Haskell "aha" moment?)

Hi Paul,
As someone who uses Haskell commercially for a couple of years now, I think
I have something to add.
A small disclaimer: none of the members of our team has an academic
background. We all have different backgrounds: C#, Java, Ruby, Python, C,
even Perl if I am not mistaken. Yet we ended up with FP first, and then
with Haskell.
We have switched to Haskell from Scala, which _is_ a multi-paradigm
language borrowing bits and pieces from other languages/paradigms and
mixing them together. It is enormously hard work to do, and for that,
I very much respect Odersky and the core Scala team. But at the same time,
Scala is a very complicated language where pretty much nothing feels
consistent or conceptually finished. Too many things in Scala can get you
only that far before they become unprincipled. And then it just stops and
you have to find a workaround.
This isn't a problem with Scala particularly though, it is exactly a
problem of borrowing bits of foreign concepts: they simply don't fuse into
one cohesive system.
As a result, the language becomes overly complicated and less useful.
In his talk (https://www.youtube.com/watch?v=v8IQ-X2HkGE) John De Goes
explains exactly the same problem, talking about Scala, but the reasoning
is applicable generally. One of the conclusions is very interesting: the
mix, the middle ground between different paradigms is often imagined as a
"paradise" that attracts people from all sides. However, in reality,
it is the place where no one is happy, no one is really interested in it,
no one wants it. Watch the talk, it is fun.
Also: https://twitter.com/chris__martin/status/1016462268073209856
I bring this in because you name lots of languages, but when you speak
about "borrowing" you don't name any. So I did. We lived through such a
life, we felt it. Thank you very much, no desire to repeat it again.
I personally went through F# in my career (and I am still coming back to
F#, and I am writing F# right now). With it, I have not exactly the same,
but similar experience.
Your joke about how Haskell has been made misses one point: it was
initially designed as a lazy language (at least as far as I know). Many
features that Haskell has now are there because of laziness: if you want to
be lazy, then you have to be pure, you have to sequence your effects, etc.
"Let's defer lambda, name it IO and let's call it Monad" - this bit isn't
even funny. Monad isn't IO. IO happens to be a monad (as many things do,
List as an example), but monad isn't IO and has nothing to do with IO. A
horse is classified as Mammal, but Mammal doesn't mean horse _at all_.
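To make the point concrete for readers who know no Haskell, here is a small base-only sketch (the names `safeDiv`, `calc`, and `pairs` are illustrative): "monad" is an interface that many ordinary types implement, and IO is merely one instance of it.

```haskell
-- "Monad" is an interface; Maybe and lists implement it, just as IO does.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Maybe is a monad: a Nothing anywhere short-circuits the whole block.
calc :: Maybe Int
calc = do
  a <- safeDiv 10 2      -- Just 5
  b <- safeDiv a 0       -- Nothing, so the final line never runs
  pure (a + b)

-- Lists are a monad too: the very same notation enumerates combinations.
pairs :: [(Int, Char)]
pairs = do
  n <- [1, 2]
  c <- "ab"
  pure (n, c)
-- IO is just one more instance of the same interface.
```

Here `calc` evaluates to `Nothing` and `pairs` to the four combinations, with no IO in sight.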
Now back to your items:
- No monads, no transformers
In the context of a lazy language, you need to sequence your effects
(including side effects), that's the first point. The second is that
instead of disappearing from Haskell, monads (and other concepts) are
making their way to other languages. Scala has them, F# has them, even C#
has them (however indirectly). Try to take away List Monad from C#
developers and they'll kill you ;)
Watch "What Haskell taught us when we were not looking" by Eric Torreborre (
https://www.youtube.com/watch?v=aNL3137C74c)
Also: https://twitter.com/Horusiath/status/1016593663626014720?s=09 :)
- There are dependent types, linear types
Agree here. And they are coming to Haskell, naturally and without breaking
it dramatically into another language.
- There are other evaluation models/abstractions (not only lambda)
I think I made my point above.
- Special syntax for records fields, etc
That would be nice. Although, even coming from a C# background, I don't
entirely understand why it is _that_ important for some people. For me
personally, it is just a bit of an annoyance. I have never felt like "Oh, I
could achieve so much more if only I had records!" :) Lenses and generic
lenses help, so be it. But I don't think that anything prevents Haskell
from having it, and I don't think that Haskell as a language needs the
dramatic change you depict to make it happen. Just a feature.
- Less operators noise, language extensions (but it’s very disputable)
I don't agree that operators are noise. You certainly can write Haskell
almost without operators if you wish.
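As a hypothetical illustration of that claim, the two functions below mean exactly the same thing; every operator is just a named function in disguise, so an operator-light style is always available:

```haskell
import Data.Char (toUpper)

-- Operator style: (.), (++) as a section, and (<$>).
shoutOp :: Maybe String -> Maybe String
shoutOp ms = (++ "!") . map toUpper <$> ms

-- Almost the same function with operators replaced by named functions
-- and explicit application.
shoutNamed :: Maybe String -> Maybe String
shoutNamed ms = fmap (\s -> map toUpper s ++ "!") ms
```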
As for extensions, I think that many more should be just switched on by
default.
- Solve problems with numerous from/to conversions (strings, etc)
You mean that conversion should happen implicitly? Thank you, but no, thank
you. This is a source of problems in many languages, and it is such a great
thing that Haskell doesn't coerce types implicitly.
- Solve problems with libraries
Surely, more libraries are better. I don't understand this "no business
value" statement. Value for which business? What does it mean "check types,
no business value"?
I also strongly disagree that there should only be one library for one
thing. Do you know how many ways there are to, say, parse XML? Or JSON? How
many different tradeoffs can be made while implementing a
parser/serialiser? Some are valuable in one "business" but totally not
acceptable in another, and vice versa. In our company, we have a set of
requirements such that _none_ of the tradeoffs made by the existing XML
parsers would satisfy them. So we had to write our own. Deliver business
value, yeah. And you say that there should only be one XML parser for a
given language? Absurd.
I think Haskell has only one problem with libraries: we need more of them.
And then it falls into a famous joke: "The problem with Open Source
Software is YOU because YOU are not contributing" :) Meaning that if we
want more good libs then we should write more good libs :)
Regards,
Alexey.
On Sat, Jul 14, 2018 at 5:05 PM Paul
I understand that my points are disputable, sure; for example, multi-paradigm Oz is dead 😊 Any rule has exceptions. But my point was that people don't like elegant, one-abstraction languages. It's my observation. For me, Smalltalk was a good language (mostly dead, except Pharo, which looks cool). Forth, a high-level "stack-around-assembler", is mostly dead (Factor looks abandoned; only 8th looks super cool, but it's not free). What else? Lisp? OK, there are SBCL, Clojure, Racket... But you don't usually find even Clojure in language trends. APL, J – super cool! Seems dead (I don't know what happened to K). ML, SML? By the way, Haskell's role was to kill the SML community; sure, it is sad to acknowledge it, but it's 100% true...
Haskell tries to be minimalistic, and IMHO this can lead to death. Joachim, I'm not saying "it's good/it's bad", "multi-paradigm is good" or anything else... I don't know what is right. These are my observations only. It looks like it can happen.
If we look at Haskell's history, we see a strange curve. I'll try to describe it with humour, so please don't take it seriously 😊
- Let’s be pure lambda fanatics!
- Is it possible to create a big application? Is it possible to compile and optimize it?!
- Let’s try...
- Wow, it’s possible!!! (Sure it’s possible; Lisp did it long, long ago.)
- It looks like a puzzle and can be used to write a lot of articles (there were articles about combinators, Jay/Cat/Scheme, etc.; now there are a lot of Haskell articles – big interest in academia. But IMHO academic interest in a language can kill it too: Clean, Strongtalk, etc.)
- Stop! How do we do I/O? Real programming?!!
- Ohh, if we wrap it in a lambda and defer it to the top level (main :: IO ()), it will have an I/O type (the wrapper is hidden in the type).
- Let’s call it... Monad!!
- Wow, cool! It works! Everybody should use monads! Doesn’t your language have monads? Then we fly to you! (Everybody forgot that monads are a workaround for a Haskell limitation and are not needed in other languages. Also, they lead to low-performance code.)
- But how to compose them???!?!
- We will wrap/unwrap, wrap/unwrap... Let’s call it... transformers!!! “Monad transformers” – sounds super cool. Your language doesn’t have a “lift” operation, right? Ugh...
- How to access record fields... How... That’s a question. ‘.’ – no! ‘#’ – no! Eureka! We will add several language extensions and voilà!
- To be continued... 😊
I love Haskell, but I think such a curve is absolutely impossible in a commercial language, with IT managers 😊 Solving a problem in a way where the solution leads to another problem, which needs a new solution again, and the only reason is to keep the lambda-only abstraction (OK, Vanessa, Backpack too 😉). Can you imagine all cars having red colour? Or all food being sweet? It’s not a technical question, but a psychological and linguistic one. Why are natural languages not so limited? They even borrow words and forms from one another 😊
Haskell’s core team knows better than me, and I respect a lot of Haskell users; most of them *helped me A LOT* (!!!). It’s not even an opinion, because I don’t know what the right way is. Let’s call it an observation and a feeling about the future.
I feel Haskell has 3 possibilities: 1) to die, 2) to change itself, 3) to fork into another language.
How I see commercial successful Haskell-like language:
- No monads, no transformers
- There are dependent types, linear types
- There are other evaluation models/abstractions (not only lambda)
- Special syntax for records fields, etc
- Less operators noise, language extensions (but it’s very disputable)
- Solve problems with numerous from/to conversions (strings, etc)
- Solve problems with libraries
Last point needs explanation:
- There are a lot of libraries written only to check some type concepts, with no business value. Also there are a lot of libraries written by students while they are learning Haskell: mostly without any business value/abandoned.
- There are situations when you have alternative libraries in one project due to dependencies (but there should be only one, not both!)
- Strange dependencies: I have even installed Agda! Why???!
IMHO the problems with libraries and the lambda-only abstraction lead to super slow compilation and a big, complex compiler.
So, currently I see (again, it’s my observation only) 2 big “camps”:
1. Academia, which has its own interests, for example, keeping Haskell minimalistic (one abstraction only). The only trade-off was to add language extensions, but they fragment the language.
2. Practical programmers, whose interests differ from the 1st “camp”.
Another observation of mine: a lot of people tried Haskell and switched to other languages (C#, F#, etc.) because they could not use it for big enterprise projects (Haskell became a hobby for small experiments or was dropped).
Joachim, I absolutely agree that a big company could solve a lot of these problems. But some of them already have their own languages (compare measure units in Haskell and in F# and see which looks better...).
When I talked about killer apps, I meant: devs like Ruby not due to its syntax but due to RoR. The same with Python: sure, Python’s syntax is very good, but without Zope, Django, TurboGears, SQLAlchemy, Twisted, Tornado, Cheetah, Jinja, etc., nobody would use Python. Sure, there are exceptions: Delphi, C++Builder, for example. But that is Borland’s bad karma 😊 They had a lot of compilers (Pascal, Prolog, C/C++, etc.), but... On the other hand, after the reincarnation we have C# 😊 Actually, all these are only observations: nobody knows the future.
/Best regards, Paul
*From: *Joachim Durchholz
*Sent: *13 July 2018, 21:49 *To: *haskell-cafe@haskell.org *Subject: *Re: [Haskell-cafe] Investing in languages (Was: What is your favourite Haskell "aha" moment?) On 13.07.2018 at 09:38, PY wrote:
1. Haskell limits itself to lambda-only. For example, instead of adding
other abstractions and becoming a modern MULTI-paradigm language,
"modern"?
That's not an interesting property.
"maintainable", "expressive" - THESE are interesting. Multi-paradigm can
help, but if overdone can hinder it - the earliest multi-paradigm
language I'm aware of was PL/I, and that was a royal mess I hear.
So, point #1 is limitation in
abstraction: monads, transformers, anything - is function. It's not
good.
Actually, limiting yourself to a single abstraction tool can be good.
This simplifies semantics and makes it easier to build stuff on top of it.
Not that I'm saying that this is necessarily the best thing.
There were such languages already: Forth, Joy/Cat, APL/J/K... Most of
them look dead.
Which proves nothing, because many multi-paradigm languages look dead, too.
When you try to be elegant, your product (language) dies.
Proven by Lisp... er, disproven.
This is not my opinion, this is only my observation. People like
diversity and variety: in food, in programming languages, in relations,
anywhere :)
Not in programming languages.
Actually multi-paradigm is usually a bad idea. It needs to be done in an
excellent fashion to create something even remotely usable, while a
single-paradigm language is much easier to do well.
And in practice, bad language design has much nastier consequences than
leaving out some desirable feature.
2. When language has killer app and killer framework, IMHO it has more
chances. But if it has _killer ideas_ only... So, those ideas will be
re-implemented in other languages and frameworks but with more simple
and typical syntax :)
"Typical" is in the eye of the beholder, so that's another non-argument.
It's difficult to compete with product,
framework, big library, but it's easy to compete with ideas. It's an
observation too :-)
Sure, but Haskell has product, framework, big library.
What's missing is commitment by a big company, that's all. Imagine
Google adopting Haskell, committing to building libraries and looking
for Haskell programmers in the streets - all of a sudden, Haskell is
going to be the talk of the day. (Replace "Google" with whatever
big-name company with deep pockets: Facebook, MS, IBM, you name it.)
language itself is not argument for me.
You are arguing an awful lot about missing language features
("multi-paradigm") to credibly make that statement.
Arguments for me (I
am a usual developer) are killer apps/frameworks/libraries/ecosystem/etc.
Currently Haskell has only Stack - it's very good, but most languages
have similar tools (not all have an LTS analogue, but the big frameworks
are the same).
Yeah, a good library ecosystem is very important, and from the reports I
see on this list it's not really good enough.
The other issue is that Haskell's extensions make it more difficult to
have library code interoperate. Though that's a trade-off: The freedom
to add language features vs. full interoperability. Java opted for the
other opposite: 100% code interoperability at the cost of a really
annoying language evolution process, and that gave it a huge library
ecosystem.
But... I'm not going to make the Haskell developers' decisions. If they
don't feel comfortable with reversing the whole culture and make
interoperability trump everything else, then I'm not going to blame
them. I'm not even going to predict anything about Haskell's future,
because my glass orb is out for repairs and I cannot currently predict
the future.
Regards,
Jo
_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Hello Alex!
A small disclaimer: none of the members of our team has an academic background. We all have different backgrounds: C#, Java, Ruby, Python, C, even Perl if I am not mistaken. Yet we ended up with FP first, and then with Haskell. We have switched to Haskell from Scala, which _is_ a multi-paradigm language borrowing bits and pieces from other languages/paradigms and mixing them together. It is an enormously hard work to do it and for that, I very much respect
Oh, my 1st question will be: did you try Eta or Frege? Maybe I’m wrong, but Eta should support Haskell libraries as well as Java ones? They allow you to use libraries from both worlds...
As a result, the language becomes overly complicated and less useful.
Yes, this is another side. You know, anything has several sides: good and bad...
Your joke about how Haskell has been made misses one point: it was initially designed as a lazy language (at least as far as I know). Many features that Haskell has now are there because of laziness: if you want to be lazy, then you have to be pure, you have to sequence your effects, etc.
True. Laziness makes Haskell unique. I think Haskell made laziness popular in modern languages, although it was known long ago (as data in “infinite streams”, etc). I think Miranda was lazy, so Haskell is lazy too 😊 And IMHO there was some lazy dialect of ML (maybe I’m wrong).
"Let's defer lambda, name it IO and let's call it Monad" - this bit isn't even funny. Monad isn't IO. IO happens to be a monad (as many things do, List as an example), but monad isn't IO and has nothing to do with IO. A horse is classified as Mammal, but Mammal doesn't mean horse _at all_.
Sure. I mean, the need for side-effects (and firstly I/O) led to the monads.
In a context of a lazy language, you need to sequence your effects (including side effects), that's the first point. The second is that instead of disappearing from Haskell, monads (and other concepts) are making their way to other languages. Scala has them, F# has them, even C# has them (however indirectly). Try to take away List Monad from C# developers and they'll kill you ;)
Better IMHO to have less infrastructure code. Better to hide all the “machinery” in the compiler. My point was that monads are a workaround for a Haskell problem; that was historically the reason for their appearance. And if I don’t have such a limitation in my language, I don’t need any monads. What are the benefits of monads in ML, for example? They are used in F#, but 1) computation expressions are not monads but a step forward, “monads++”, and 2) they play a different role in F#: simplifying the code. And you can avoid them in all languages except Haskell. For example, Prolog can be “pure” and do I/O without monads; so can Clean, as well as F#. Monads have pros, sure, but they are not composable, and the workaround leads to another workaround – transformers. I’m not unique in my opinion: https://www.youtube.com/watch?v=rvRD_LRaiRs All of this looks like over-engineering due to the mentioned limitation. There is no such limitation in ML or F#. D has the keyword “pure” and didn’t introduce monads. Performance is a very important feature of a language, and that limitation is the #1 reason why Haskell has bad and unpredictable performance. A “do”-block is not the same as a “flat” block of C# statements, and its performance is not the same. I can achieve the Maybe effect with nullables+exceptions or the ?-family of operators, List with permutations/LINQ, guard with if+break/continue, and do it without sacrificing performance. ListT/conduits are just generators/enumerators. The benefit of monads IMHO is small; they are a workaround for a Haskell problem and are not needed in other languages. Sure, there are monads in OCaml, JavaScript, Python (as experimental libraries), but the reason is hype. Nobody will remember them after 5-10 years... Actually, this is very-very subjective IMHHHHO 😊
Lenses and generic lenses help, so be it. But I don't think that anything prevents Haskell from having it, and I don't think that Haskell as a language needs a dramatic change as you depict to make it happen. Just a feature.
When I have legacy code, there are a lot of types whose fields don’t start with a “_” prefix, so I need to create lenses explicitly... “Infrastructure” code. What is the business value of such code: nothing. To a non-Haskell programmer it looks like you are trying to solve a non-existing problem 😊 (A very provocative point: all Haskell solutions look over-engineered. The reason is: lambda-abstraction-only. When you try to build something big from little pieces, the process will be over-engineered. Imagine that the pyramids were built of small bricks.)
I don't agree that operators are noise. You certainly can write Haskell almost without operators if you wish.
Here I agree with D. Knuth’s ideas on literate programming: if code cannot be easily read and understood on a hard copy, then the language used is not fine. Haskell code needs help from an IDE, type hints, etc. And I often meet the case when somebody does not understand what the monads in “do” blocks are. Also, there are a lot of operators in different libraries and no way to know what some operator means (different libraries, even different versions, have their own sets of operators).
As for extensions, I think that many more should be just switched on by default.
+1
You mean that conversion should happen implicitly? Thank you, but no, thank you. This is a source of problems in many languages, and it is such a great thing that Haskell doesn't coerce types implicitly.
No... Actually, I have no idea what is better. Currently there are a lot of conversions. Some library functions expect String, others Text, also ByteString, lazy/strict, and the same with the numbers (Word/Int/Integer). So conversions happen often.
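For readers unfamiliar with the conversions being discussed, here is a base-only sketch of what "spelled out explicitly" looks like in practice (the names `total`, `parsed`, and `rendered` are illustrative; with the text package one would similarly write Data.Text.pack / Data.Text.unpack):

```haskell
import Data.Word (Word16)

-- Widening a Word16 into Integer must be written with fromIntegral;
-- nothing happens implicitly, so there is no silent overflow or truncation.
total :: Integer
total = fromIntegral (maxBound :: Word16) + 1   -- 65535 + 1

-- String <-> number conversions are explicit too.
parsed :: Int
parsed = read "42"

rendered :: String
rendered = show total
```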
I don't understand this "no business value" statement. Value for which business? What does it mean "check types, no business value"?
There are libraries which do nothing at run-time. Only type playing. Only abstractions over types. And somebody says: oh man, see how many libraries Haskell has. But you can compare the libraries of Haskell, Java, C#, JavaScript, Perl, Python 😊 All the libraries of Java, Python... have business value. Real-world functionality. Not abstract play with types. But a more important point is the case with the installed Agda 😊 or alternative libraries which do the same/similar things. The reason is important: Haskell moves a lot of functionality to libraries, which is not good design IMHO. This is the root of the problem. Better to have one good solid library bundled with GHC itself (“batteries included”), and only specific things would live in libraries and frameworks. Monads and monad transformers are a central thing in Haskell. They are located in libraries. There are standard parser combinators in GHC itself, but you will have another one (or more than 1!) in the same project. Etc., etc... Also the installed GHC... Why is it so big?! IMHO it’s time to clean up the ecosystem, to reduce it to “batteries” 😊
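For context, the parser combinators that do ship with GHC's boot libraries are Text.ParserCombinators.ReadP in base. A small sketch (the `pair`/`parsePair` names are illustrative):

```haskell
import Data.Char (isDigit)
import Text.ParserCombinators.ReadP

-- Parse "x,y" pairs of natural numbers, e.g. "3,14".
pair :: ReadP (Int, Int)
pair = do
  x <- munch1 isDigit   -- one or more digits
  _ <- char ','
  y <- munch1 isDigit
  pure (read x, read y)

-- Accept only a complete parse of the whole input.
parsePair :: String -> Maybe (Int, Int)
parsePair s = case readP_to_S (pair <* eof) s of
  [(r, "")] -> Just r
  _         -> Nothing
```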
And then it falls into a famous joke: "The problem with Open Source Software is YOU because YOU are not contributing" :) Meaning that if we want more good libs then we should write more good libs :)
But... I'm not going to make the Haskell developers' decisions. If they don't feel comfortable with reversing the whole culture and making interoperability trump everything else, then I'm not going to blame them.
Absolutely true 😊

On Sat, Jul 14, 2018 at 11:28 AM Paul
Better IMHO to have less infrastructure code. Better is to hide all “machinery” in compiler.
Hiding all the "machinery" in the compiler leads to Perl 5, PL/I, and similar monoliths. Which, if they do manage to catch on, eventually get discarded because the "compiler" can't keep up with the rest of the world without becoming a completely different language… which will move everything into the ecosystem so it can keep up. Monoliths have one advantage: people can ignore all the stuff going on inside the monolith, and therefore think they're easier to work with. Until they no longer do what those people need, and they get tossed on the trash heap, never to be seen again. The languages that stick around, that are still used, are the ones that are extensible instead of being monoliths.
-- brandon s allbery kf8nh sine nomine associates allbery.b@gmail.com ballbery@sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net

On 14/07/2018 at 17:28, Paul ["aquagnu"] wrote:
By the way, Haskell role was to kill SML community, sure it is sad to acknowledge it, but it’s 100% true...
Could you please cite some */serious/* source supporting this claim?
And IMHO there was some lazy dialect of ML (may be, I’m not right).
LML of Lennart Augustsson and Thomas Johnsson, 1984 ("LISP and Functional Programming"). It came before Miranda (1985)... If you are not certain, read something, please...
Smalltalk was good language (*mostly dead*, except Pharo, which looks cool).
Well, Squeak is alive, and its "children" - Scratch, Snap, etc. - as well. Pharo is a fork of Squeak too. The European Smalltalk User Group organizes quite a big conference in Cagliari in September. It seems that you are exaggerating a bit, killing off all the languages you have heard [a little...] about...
Jerzy Karczmarczuk /Caen, France/

Oh, my 1st question will be: did you try Eta, Frege?
Yes, I tried Eta for some time and implemented a couple of Kafka-related libraries in it (Kafka Client, Schema Registry, etc.). Eta is Haskell 7.10.3 with the C FFI replaced with a Java FFI (plus some features ported from GHC 8).
Eta should support Haskell libraries as well as Java ones?
Eta does, through a very nice FFI. But so does Haskell: we have a nice FFI for using C libs. I maintain a couple of libs that use it extensively; it works quite well. There are also things like "inline-c", "inline-r", and "inline-java" in Haskell, so we CAN call into other languages/paradigms quite nicely. I don't see your point here in making parallels with Eta.
expressions are not monads but step forward, “monads++”
Can I have a definition and the laws of "monads++"? Otherwise, I don't understand what you are talking about. If it obeys the monad laws, it is a monad. But I'll wait for the definition.
Prolog can be “pure” and to do I/O without monads, also Clean can as well as F#.
But it is not lazy - that's one. Remember, laziness is our requirement here. Whatever you propose _must_ work in the context of laziness. Second, the inability to track side effects in F# is not "simplification" and is not a benefit, but rather a limitation and a disadvantage of the language and its type system. It is _just a bit_ less dramatic in F# because F# is not lazy, but it is quite a miss anyway. Third, AFAIK CLR restrictions do not allow implementing things like Functor, Monad, etc. in F# directly because it can't support HKT. So they work around the problem. But again, F# cannot be expressive enough: no HKT, no ability to express constraints, no ability to track effects...
I’m not unique in my opinion: https://www.youtube.com/watch?v=rvRD_LRaiRs
I've watched this talk and I haven't got the point. I don't accept the "unfamiliar means bad" or "unfamiliar means complex" arguments.
overengineering due to mentioned limitation. No such one in ML, F#.
Really? You keep mentioning F#, and I am struggling with it right now _because_ of such limitations. There is no meaningful way to abstract over generics, it is impossible to reason about functions' behaviour from their type signatures (because side effects can happen anywhere), it has Option but you can still get null, you can't have constraints, etc., etc. It is sooooo muuuuch mooore limited.
I can achieve Maybe effect with nullable+exceptions or ?-family operators,
No, you can't.
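A sketch of what this retort is pointing at: Maybe is an ordinary first-class value in Haskell, so failure-aware code composes with generic machinery such as `traverse`, whereas a ?-operator is baked-in syntax that cannot be passed to a function. (`lookupAll` is a made-up name for illustration.)

```haskell
-- Succeeds only if every key is present; a single missing key
-- makes the whole result Nothing, with no explicit null checks.
lookupAll :: Eq k => [k] -> [(k, v)] -> Maybe [v]
lookupAll ks table = traverse (\k -> lookup k table) ks
-- The same traverse also works for IO, Either, parsers, ... because
-- the effect is a value, not special syntax.
```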
Nobody will remember them after 5-10 years...
OCaml has existed for 22 years now, is doing well, and solves the problems it was designed for very well. So _already_ more than twice your prediction.
fields are not starting with “_” prefix, so I need to create lenses explicitly
No, you don't. You don't have to have a "_" prefix to generate a lens. You have total control here.
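To make that concrete, here is a minimal hand-written van Laarhoven lens for a hypothetical "legacy" record with no underscore prefixes, using only base (the lens library automates exactly this with Template Haskell, and its naming rules are configurable):

```haskell
{-# LANGUAGE RankNTypes #-}
import Data.Functor.Const (Const (..))
import Data.Functor.Identity (Identity (..))

-- A lens is just a function polymorphic in a Functor.
type Lens' s a = forall f. Functor f => (a -> f a) -> s -> f s

-- "Legacy" record: plain field names, no underscores.
data Person = Person { name :: String, age :: Int } deriving (Eq, Show)

-- A hand-written lens for the age field.
ageL :: Lens' Person Int
ageL f p = fmap (\a -> p { age = a }) (f (age p))

-- view/set fall out by instantiating f to Const and Identity.
view :: Lens' s a -> s -> a
view l = getConst . l Const

set :: Lens' s a -> a -> s -> s
set l a = runIdentity . l (\_ -> Identity a)
```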
What is the business value of such code: nothing
Can you define "business value", please? You mention it a couple of times, so I am puzzled. Otherwise, it reminds me of https://twitter.com/newhoggy/status/999930802589724672
For non-Haskell programmer it looks like you try to solve non-existing problem
For Haskell programmers, Java solves non-existing problems all the time :) Every single time you see on Twitter or here something like "I replaced hundreds of lines of Java code with a single 'traverse'", you get the proof. And it happens so often. Also, what exactly is that "non-existent problem"? In an immutable environment (and C# gives you that, F# does, even JS does), how do these languages solve the non-existing problem that lens setters do? I'll answer this question myself: they don't. So even my non-FP colleagues working on frontend in JavaScript use a JS lens library. Precisely because it _does_ solve their very existing problems.
Haskell code needs help from IDE, types hints, etc.
Types are written and read by programmers. Java is impossible without an IDE. What is the point here?
And I often meet a case when somebody does not understand what monads are in “do” blocks.
Familiarity again. They learn, and then they understand. I don't understand C++. I barely understand C. I don't understand Ruby (every time I have to work with it, it is a nightmare for me). I don't understand Erlang. Does it mean that all these languages are bad and need to die or change? Or does it just mean that I am too ignorant to learn them? I think it is the second. I also understand English, and I can understand Russian, but not Japanese or Swiss German. Or Tajik. Does it mean that Tajik is complicated? No, it is a very simple language I used to speak fluently when I was a child :) Familiarity means nothing. I repeat: don't bring up familiarity, it means nothing.
Text, also ByteString, lazy/strict, the same with the numbers (word/int/integer)
But the problem is not with Text/ByteString! The problem is with the perception that there must be just one string type that rules them all! But that is wrong! These types are different, for different use cases and with different tradeoffs! It is good that I can choose what I need according to my requirements. Int/Word are not even close to being the same! Words cannot be negative, at least. If you do bit manipulations, you want Words, not Ints! Java doesn't have unsigned numbers, so bit manipulations are insanely hard in Java since you _always_ need to account for the sign bit. This is the _real_ problem.
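The Int/Word point can be shown in two lines: the same bit pattern (0b11111110) shifts differently depending on whether the type is signed (illustrative names):

```haskell
import Data.Bits (shiftR)
import Data.Int (Int8)
import Data.Word (Word8)

-- Arithmetic shift on a signed type: the sign bit is copied in.
signedShift :: Int8
signedShift = (-2) `shiftR` 1    -- stays negative

-- Logical shift on the unsigned type: a zero is shifted in,
-- which is what bit-twiddling code usually wants.
unsignedShift :: Word8
unsignedShift = 254 `shiftR` 1
```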
Better is to have one good solid library bundled with GHC itself (“batteries included”) and only specific things will live in libraries and frameworks.
Better for whom? Definitely NOT better for me and my team using Haskell commercially. Again, to effectively meet requirements, functional and non-functional, we don't want just a mediocre compromise thing. I gave you an example with parsers already: different parsers have different tradeoffs. It is often a GOOD thing that there are many different libraries doing the same thing differently.
As a person using Haskell commercially, I specifically don't want a
"batteries included, one way of doing things" solution.
Regards,
Alexey.
On Sun, Jul 15, 2018 at 1:28 AM Paul
Hello Alex!
A small disclaimer: none of the members of our team has an academic background. We all have different backgrounds: C#, Java, Ruby, Python, C, even Perl if I am not mistaken. Yet we ended up with FP first, and then with Haskell.
We have switched to Haskell from Scala, which _is_ a multi-paradigm language borrowing bits and pieces from other languages/paradigms and mixing them together. It is an enormously hard work to do it and for that, I very much respect
Oh, my first question will be: did you try Eta or Frege? Maybe I'm wrong, but Eta should support Haskell libraries as well as Java ones? They allow you to use libraries from both worlds...
As a result, the language becomes overly complicated and less useful.
Yes, this is another side. You know, anything has several sides: good and bad...
Your joke about how Haskell has been made misses one point: it was initially designed as a lazy language (at least as far as I know). Many features that Haskell has now are there because of laziness: if you want to be lazy, then you have to be pure, you have to sequence your effects, etc.
True. Laziness makes Haskell unique. I think Haskell made laziness so popular in modern languages, although it was known long ago (as data in “infinite streams”, etc). I think Miranda was lazy, so Haskell is lazy too 😊 And IMHO there was some lazy dialect of ML (maybe I'm wrong).
"Let's defer lambda, name it IO and let's call it Monad" - this bit isn't even funny. Monad isn't IO. IO happens to be a monad (as many things do, List as an example), but monad isn't IO and has nothing to do with IO. A horse is classified as Mammal, but Mammal doesn't mean horse _at all_.
Sure. I mean, the need for side effects (and I/O first of all) led to the monads.
In a context of a lazy language, you need to sequence your effects (including side effects), that's the first point. The second is that instead of disappearing from Haskell, monads (and other concepts) are making their way to other languages. Scala has them, F# has them, even C# has them (however indirectly). Try to take away List Monad from C# developers and they'll kill you ;)
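To make the "Monad isn't IO" point concrete, here is a small sketch (names like `safeDiv` are illustrative): the same `do` notation drives `Maybe` and lists, and nothing in it performs any I/O.

```haskell
-- Maybe: short-circuiting on failure, no I/O anywhere.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

calc :: Maybe Int
calc = do
  a <- safeDiv 10 2   -- Just 5
  b <- safeDiv a 0    -- Nothing: the remaining steps are skipped
  pure (a + b)

-- List: the very same notation expresses nondeterminism.
pairs :: [(Int, Char)]
pairs = do
  n <- [1, 2]
  c <- "ab"
  pure (n, c)

main :: IO ()
main = do
  print calc    -- Nothing
  print pairs   -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```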
IMHO it is better to have less infrastructure code. Better to hide all the “machinery” in the compiler.
My point was that monads are a workaround for a Haskell problem; this was historically the reason for their appearance. And if I don't have such a limitation in my language, I don't need any monads. What are the monad benefits in ML, for example? They are used in F#, but 1) computation expressions are not monads but a step forward, “monads++”, and 2) they play a different role in F#: simplifying the code. And you *can* avoid them in all languages except Haskell. For example, Prolog can be “pure” and do I/O without monads, and so can Clean and F#. Monads have pros, sure, but they are not composable, and the workaround leads to another workaround – transformers. I'm not alone in my opinion: https://www.youtube.com/watch?v=rvRD_LRaiRs All of this looks like overengineering due to the mentioned limitation. There is no such limitation in ML or F#. D has the keyword “pure” and didn't introduce monads. Performance is a very important feature of a language, and that limitation is reason #1 why Haskell has bad and unpredictable performance. A “do” block is not the same as a “flat” block of C# statements, and its performance is not the same. I can achieve the Maybe effect with nullables+exceptions or the ?-family of operators, List with permutations/LINQ, guard with if+break/continue, and do it without sacrificing performance. ListT/conduits are just generators/enumerators. The benefit of monads IMHO is small; they are a workaround for a Haskell problem and are not needed in other languages. Sure, there are monads in OCaml, JavaScript, Python (as experimental libraries), but the reason is hype. Nobody will remember them after 5-10 years...
Actually this is very-very subjective IMHHHHO 😊
Lenses and generic lenses help, so be it. But I don't think that anything prevents Haskell from having it, and I don't think that Haskell as a language needs a dramatic change as you depict to make it happen. Just a feature.
When I have legacy code, there are a lot of types whose fields do not start with the “_” prefix, so I need to create lenses explicitly... “Infrastructure” code. What is the business value of such code? Nothing. To a non-Haskell programmer it looks like you are trying to solve a non-existent problem 😊 (*very provocative point: all Haskell solutions look overengineered. The reason is: lambda-abstraction-only. When you try to build something big from little pieces, the process will be very overengineered. Imagine the pyramids built of small bricks*).
I don't agree that operators are noise. You certainly can write Haskell almost without operators if you wish.
Here I agree with D. Knuth's ideas of literate programming: if code cannot be easily read and understood on a hard copy, then the language used is not fine. Haskell code needs help from an IDE, type hints, etc. And I often meet the case when somebody does not understand what the monads are in “do” blocks. Also, there are a lot of operators in different libraries and no way to know what some operator means (different libraries, even different versions, have their own sets of operators).
As for extensions, I think that many more should be just switched on by default.
+1
You mean that conversion should happen implicitly? Thank you, but no, thank you. This is a source of problems in many languages, and it is such a great thing that Haskell doesn't coerce types implicitly.
No... Actually, I have no idea what is better. Currently there are a lot of conversions. Some library functions expect String, others Text, also ByteString, lazy/strict; the same with the numbers (Word/Int/Integer). So, conversions happen often.
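The numeric side of those conversions can be sketched with `base` alone (Text and ByteString are left out to keep the example self-contained): every conversion is spelled out explicitly, and nothing coerces silently.

```haskell
import Data.Word (Word32)

main :: IO ()
main = do
  let n    = 300 :: Int
      w    = fromIntegral n :: Word32   -- Int -> Word32, explicit
      i    = fromIntegral w :: Integer  -- Word32 -> Integer, explicit
      s    = show i                     -- number -> String
      back = read s :: Int              -- String -> number
  print (w, i, s, back)                 -- (300,300,"300",300)
```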
I don't understand this "no business value" statement. Value for which business? What does it mean "check types, no business value"?
There are libraries which do nothing at run-time. Only type playing. Only abstractions over types. And somebody says: oh man, see how many libraries Haskell has. But you can compare the libraries of Haskell, Java, C#, JavaScript, Perl, Python 😊 All libraries of Java, Python... have business value. Real-world functionality. Not abstract play with types. But a more important point is the case of the installed Agda 😊 or alternative libraries which do the same/similar things. The reason is important: Haskell moves a lot of functionality into libraries, which is not good design IMHO. This is the root of the problem. Better to have one good solid library bundled with GHC itself (“batteries included”), and only specific things would live in libraries and frameworks. Monads and monad transformers are a central thing in Haskell. They are located in libraries. There are standard parser combinators in GHC itself, but you will have another one in the same project (or more than one!). Etc., etc...
Also the installed GHC... Why is it so big!? IMHO it's time to clean up the ecosystem, to reduce it to “batteries” 😊
And then it falls into a famous joke: "The problem with Open Source Software is YOU because YOU are not contributing" :) Meaning that if we want more good libs then we should write more good libs :)
Absolutely true 😊
On Sat, Jul 14, 2018 at 5:05 PM Paul
wrote: I understand that my points are disputable, sure; for example, multi-paradigm Oz – dead 😊 Any rule has exceptions. But my point was that people don't like elegant, one-abstraction languages. It's my observation. For me, Smalltalk was a good language (mostly dead, except Pharo, which looks cool). Forth – a high-level “stack-around-assembler”, mostly dead (Factor looks abandoned; only 8th looks super cool, but it's not free). What else? Lisp? OK, there are SBCL, Clojure, Racket... But you don't usually even find Clojure in language trends. APL, J – super cool! Seem dead (I don't know what happened with K). ML, SML? By the way, Haskell's role was to kill the SML community; sure, it is sad to acknowledge it, but it's 100% true...
Haskell tries to be minimalistic, and IMHO this can lead to death. Joachim, I'm not saying “it's good/it's bad”, “multi-paradigm is good” or anything else... I don't know what is right. These are my observations only. It looks like it can happen.
If we look at Haskell history, we see a strange curve. I'll try to describe it with humour, so please don't take it seriously 😊
· Let’s be pure lambda fanatics!
· Is it possible to create a big application?
· Is it possible to compile and optimize it?!
· Let’s try...
· Wow, it’s possible!!! (sure, it’s possible, Lisp did it long-long ago).
· Looks like a puzzle; can be used to write a lot of articles (there were articles about combinators, Jay/Cat/Scheme, etc.; now there are a lot of Haskell articles – big interest in academia. But IMHO academic interest in a language can kill it too: Clean, Strongtalk, etc.)
· Stop! How to do I/O? Real programming?!!
· Ohh, if we wrap it in a lambda and defer it to the top level (main :: IO ()), it will have an I/O type (the wrapper is hidden in the type)
· Let’s call it... Monad!!
· Wow, cool! It works! Everybody should use monads! Does your language not have monads? Then we fly to you! (everybody forgot that monads are a workaround for a Haskell limitation and are not needed in other languages. Also they lead to low-performance code)
· But how to compose them???!?!
· We will wrap/unwrap, wrap/unwrap.. Let’s call it... transformers!!! “Monad transformers” – sounds super cool. Your language does not have “lift” operation, right? Ugh...
· How to access record fields... How... That's the question. ‘.’ – no! ‘#’ – no! Eureka! We will add several language extensions and voila!
· To be continued... 😊
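The "wrap it in a lambda and defer it" bullet can be made concrete with a short sketch (the function names here are illustrative): an `IO ()` value is just a description of an effect, and nothing runs unless it is reached from `main`.

```haskell
-- Defining `greet` performs no I/O; it is an ordinary value of type IO ().
greet :: IO ()
greet = putStrLn "hello"

-- Actions compose like data: build a bigger action from a smaller one.
twice :: IO () -> IO ()
twice act = act >> act

main :: IO ()
main = do
  let _unused = putStrLn "never printed"  -- constructed, but never executed
  twice greet                             -- prints "hello" twice
```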
I love Haskell, but I think such a curve is absolutely impossible in a commercial language. With IT managers 😊 To solve a problem in a way where the solution leads to another problem, which needs a new solution again, and the only reason is to keep lambda-abstraction-only (OK, Vanessa, Backpack too 😉). Can you imagine all cars being red? Or all food being sweet? It's not a technical question, but a psychological and linguistic one. Why are natural languages not so limited? They even borrow words and forms from one another 😊
Haskell's core team knows better than me, and I respect a lot of Haskell users; most of them *helped me A LOT* (!!!). It's not even an opinion, because I don't know what the right way is. Let's call it an observation and a feeling about the future.
I feel Haskell has 3 options: 1) to die, 2) to change itself, 3) to fork into another language
How I see a commercially successful Haskell-like language:
· No monads, no transformers
· There are dependent types, linear types
· There are other evaluation models/abstractions (not only lambda)
· Special syntax for records fields, etc
· Less operator noise, fewer language extensions (but this is very disputable)
· Solve problems with numerous from/to conversions (strings, etc)
· Solve problems with libraries
Last point needs explanation:
· There are a lot of libraries written only to check some type concepts, without any business value. Also there are a lot of libraries written by students while learning Haskell: mostly without business value/abandoned
· There are situations when you have alternative libraries in one project due to dependencies (but there should be only one, not both!)
· Strange dependencies: I even have Agda installed! Why???!
IMHO the problems with libraries and lambda-only-abstraction lead to super slow compilation and a big, complex compiler.
So, currently I see (again, it’s my observation only) 2 big “camps”:
1. Academia, which has its own interests, for example, keeping Haskell minimalistic (one-abstraction-only). The only trade-off was to add language extensions, but they fragment the language
2. Practical programmers, whose interests are different from the 1st “camp”
Another observation of mine: a lot of people tried Haskell and then switched to other languages (C#, F#, etc.) because they cannot use it for big enterprise projects (Haskell becomes a hobby for small experiments, or is dropped).
Joachim, I absolutely agree that a big company could solve a lot of these problems. But some of them already have their own languages (you can compare units of measure in Haskell and in F# and see which looks better...).
When I talked about a killer app, I meant: devs like Ruby not because of its syntax but because of RoR. The same with Python: sure, Python's syntax is very good, but without Zope, Django, TurboGears, SQLAlchemy, Twisted, Tornado, Cheetah, Jinja, etc., nobody would use Python. Sure, there are exceptions: Delphi, C++Builder, for example. But that is the bad karma of Borland 😊 They had a lot of compilers (Pascal, Prolog, C/C++, etc.), but... On the other hand, after reincarnation we have C# 😊 Actually, all these are only observations: nobody knows the future.
/Best regards, Paul
*From: *Joachim Durchholz
*Sent: *13 July 2018 21:49 *To: *haskell-cafe@haskell.org *Subject: *Re: [Haskell-cafe] Investing in languages (Was: What is your favourite Haskell "aha" moment?) On 13.07.2018 at 09:38, PY wrote:
1. Haskell limits itself to lambda-only. Example: instead of adding other abstractions and becoming a modern MULTI-paradigm language,
"modern"?
That's not an interesting property.
"maintainable", "expressive" - THESE are interesting. Multi-paradigm can
help, but if overdone can hinder it - the earliest multi-paradigm
language I'm aware of was PL/I, and that was a royal mess I hear.
So, point #1 is a limitation in abstraction: monads, transformers, anything – everything is a function. It's not good.
Actually, limiting yourself to a single abstraction tool can be good.
This simplifies semantics and makes it easier to build stuff on top of it.
Not that I'm saying that this is necessarily the best thing.
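One concrete upside of that single-abstraction discipline (a sketch; `myWhen` is a hypothetical name): control-flow constructs in Haskell are ordinary functions, so anyone can define or extend them in library code rather than waiting for new keywords.

```haskell
import Control.Monad (replicateM_)

-- A conditional "statement" is just a two-line function; no keyword needed.
myWhen :: Applicative f => Bool -> f () -> f ()
myWhen cond act = if cond then act else pure ()

main :: IO ()
main = do
  myWhen True  (putStrLn "runs")
  myWhen False (putStrLn "skipped")
  replicateM_ 2 (putStrLn "looped")  -- a "loop" is a plain function, too
```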
There were such languages already: Forth, Joy/Cat, APL/J/K... Most of
them look dead.
Which proves nothing, because many multi-paradigm languages look dead, too.
When you try to be elegant, your product (language) dies.
Proven by Lisp... er, disproven.
This is not my opinion, this is only my observation. People like
diversity and variety: in food, in programming languages, in relations,
anywhere :)
Not in programming languages.
Actually multi-paradigm is usually a bad idea. It needs to be done in an
excellent fashion to create something even remotely usable, while a
single-paradigm language is much easier to do well.
And in practice, bad language design has much nastier consequences than
leaving out some desirable feature.
2. When language has killer app and killer framework, IMHO it has more
chances. But if it has _killer ideas_ only... So, those ideas will be
re-implemented in other languages and frameworks but with more simple
and typical syntax :)
"Typical" is in the eye of the beholder, so that's another non-argument.
It's difficult to compete with product,
framework, big library, but it's easy to compete with ideas. It's an
observation too :-)
Sure, but Haskell has product, framework, big library.
What's missing is commitment by a big company, that's all. Imagine
Google adopting Haskell, committing to building libraries and looking
for Haskell programmers in the streets - all of a sudden, Haskell is
going to be the talk of the day. (Replace "Google" with whatever
big-name company with deep pockets: Facebook, MS, IBM, you name it.)
The language itself is not an argument for me.
You are arguing an awful lot about missing language features
("multi-paradigm") to credibly make that statement.
Arguments for me (I am a usual developer) are killer apps/frameworks/libraries/ecosystem/etc.
Currently Haskell has only stack - it's very good, but most languages
have similar tools (not all have an LTS analogue, but big frameworks are the
same).
Yeah, a good library ecosystem is very important, and from the reports I
see on this list it's not really good enough.
The other issue is that Haskell's extensions make it more difficult to
have library code interoperate. Though that's a trade-off: The freedom
to add language features vs. full interoperability. Java opted for the
other opposite: 100% code interoperability at the cost of a really
annoying language evolution process, and that gave it a huge library
ecosystem.
But... I'm not going to make the Haskell developers' decisions. If they
don't feel comfortable with reversing the whole culture and make
interoperability trump everything else, then I'm not going to blame
them. I'm not even going to predict anything about Haskell's future,
because my glass orb is out for repairs and I cannot currently predict
the future.
Regards,
Jo
_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Alexey Raga writes:
As a person using Haskell commercially, I specifically don't want a "batteries included, one way of doing things" solution.
I usually do not send "I agree" messages. But I agree, without reservation. -- Brett M. Gilio Free Software Foundation, Member https://parabola.nu | https://emacs.org

On 15.07.2018 at 08:44, Alexey Raga wrote:
If you do bit manipulation you want Words, not Ints! Java doesn't have unsigned numbers, so bit manipulation is insanely hard in Java, since you _always_ need to account for the sign bit. This is the _real_ problem.
You don't use bit manipulation in (idiomatic) Java: you use EnumSets (use case 1) or one of the various arbitrary-precision math libraries (use case 2). I have yet to see a third use case, and there are already libraries for these, so I do not see a problem. All of which, of course, just elaborates the point you were making around this paragraph: each language lives in its own niche, and when coming from a different niche one usually sees all kinds of problems but has yet to encounter the solutions. Regards, Jo

A small disclaimer: none of the members of our team has an academic background. We all have different backgrounds: C#, Java, Ruby, Python, C, even Perl if I am not mistaken. Yet we ended up with FP first, and then with Haskell. We have switched to Haskell from Scala, which _is_ a multi-paradigm language borrowing bits and pieces from other languages/paradigms and mixing them together. It is an enormously hard work to do it and for that, I very much respect Oh, my 1st question will be: did you try Eta, Frege? May be I’m wrong but Eta should support Haskell libraries as well as Java ones? They allow you to use libraries from the both world... As a result, the language becomes overly complicated and less useful. Yes, this is another side. You know, anything has several sides: good and bad... Your joke about how Haskell has been made misses one point: it was initially designed as a lazy language (at least as far as I know). Many features that Haskell has now are there because of laziness: if you want to be lazy, then you have to be pure, you have to sequence your effects, etc. True. Laziness makes Haskell unique. I think Haskell makes laziness so popular in modern languages although it was known long ago (as data in “infinite streams”, etc). I think, Miranda was lazy, so Haskell is lazy too 😊 And IMHO there was some lazy dialect of ML (may be, I’m not right). "Let's defer lambda, name it IO and let's call it Monad" - this bit isn't even funny. Monad isn't IO. IO happens to be a monad (as many things do, List as an example), but monad isn't IO and has nothing to do with IO. A horse is classified as Mammal, but Mammal doesn't mean horse _at all_. Sure. I mean, the need of side-effects (and firstly I/O) led to the monads. In a context of a lazy language, you need to sequence your effects (including side effects), that's the first point. The second is that instead of disappearing from Haskell, monads (and other concepts) are making their way to other languages. 
Scala has them, F# has them, even C# has them (however indirectly). Try to take away List Monad from C# developers and they'll kill you ;) Better IMHO to have less infrastructure code. Better is to hide all “machinery” in compiler. My point was that monads are workaround of Haskell problem, this was historically reason of their appearance. And if I have not such limitation in my language I don’t need any monads. What are the monad benefits in ML, for example? They are using in F#, but 1) comp. expressions are not monads but step forward, “monads++” and 2) they play different role in F#: simplifying of the code. And you can avoid them in all languages except Haskell. For example, Prolog can be “pure” and to do I/O without monads, also Clean can as well as F#. Monads have pros, sure, but they are not composable and workaround leads to another workaround – transformers. I’m not unique in my opinion: https://www.youtube.com/watch?v=rvRD_LRaiRs All of this looks like overengineering due to mentioned limitation. No such one in ML, F#. D has keyword “pure”, and didn’t introduce monads. Performance is very important feature of the language, that limitation is the reason #1 why Haskell has bad and unpredictable performance. “do”-block is not the same as “flat” block of C# statements and its performance is not the same. I can achieve Maybe effect with nullable+exceptions or ?-family operators, List with permutations/LINQ, guard with if+break/continue and to do it without sacrificing performance.. ListT/conduits – are just generators/enumerators. Benefit of monads IMHO is small, they are workaround of Haskell problem and are not needed in other languages. Sure, there are monads in Ocaml, Javascript, Python (as experimental libraries), but the reason is hype. Nobody will remember them after 5-10 years... Actually this is very-very subjective IMHHHHO 😊 Lenses and generic lenses help, so be it. 
But I don't think that anything prevents Haskell from having it, and I don't think that Haskell as a language needs a dramatic change as you depict to make it happen. Just a feature. When I have legacy code, there are a lot of types which fields are not starting with “_” prefix, so I need to create lenses explicitly... “Infrastructure” code. What is the business value of such code: nothing. For non-Haskell programmer it looks like you try to solve non-existing problem 😊 (very-very provocative point: all Haskell solutions looks very overengineering. The reason is: lambda-abstraction-only. When you try to build something big from little pieces then the process will be very overengineering. Imagine that the pyramids are built of small bricks). I don't agree that operators are noise. You certainly can write Haskell almost without operators if you wish. Here I’m agree with D. Knuth ideas of literature programming: if code can not be easy read and understand on the hard-copy then used language is not fine. Haskell code needs help from IDE, types hints, etc. And I often meet a case when somebody does not understand what monads are in “do” blocks. Also there are a lot of operators in different libraries and no way to know what some operator means (different libraries, even different versions have own set of operators). As for extensions, I think that many more should be just switched on by default. +1 You mean that conversion should happen implicitly? Thank you, but no, thank you. This is a source of problems in many languages, and it is such a great thing that Haskell doesn't coerce types implicitly. No... Actually, I have not idea what is better. Currently there are a lot of conversions. Some libraries functions expect String, another - Text, also ByteString, lazy/strict, the same with the numbers (word/int/integer). So, conversions happen often. I don't understand this "no business value" statement. Value for which business? 
What does it mean "check types, no business value"? There are libraries which nothing do in run-time. Only types playing. Only abstractions over types. And somebody says: oh man, see how many libraries has Haskell. But you can compare libraries of Haskell, Java, C#, Javascript, Perl, Python 😊 All libraries of Java, Python... have business value. Real-world functionality. Not abstract play with types. But more important point is a case with installed Agda 😊 or alternative libraries which does the same/similar things. The reason is important: Haskell moves a lot of functionality to libraries which is not good design IMHO. This is the root of the problem. Better is to have one good solid library bundled with GHC itself (“batteries included”) and only specific things will live in libraries and frameworks. Monads and monads transformers are central thing in Haskell. They a located in libraries. There is standard parser combinators in GHC itself, but you will have in the same project another one (or more than 1!). Etc, etc... Also installed GHC... Why is it so big!? IMHO it’s time to clear ecosystem, to reduce it to “batteries” 😊 And then it falls into a famous joke: "The problem with Open Source Software is YOU because YOU are not contributing" :) Meaning that if we want more good libs then we should write more good libs :) Absolutely true 😊 On Sat, Jul 14, 2018 at 5:05 PM Paul
wrote: I understand that my points are disputable, sure, example, multi-pardigm Oz – dead 😊 Any rule has exceptions. But my point was that people don’t like elegant and one-abstraction languages. It’s my observation. For me, Smalltalk was good language (mostly dead, except Pharo, which looks cool). Forth – high-level “stack-around-assembler”, mostly dead (Factor looks abandoned, only 8th looks super cool, but it’s not free). What else? Lisp? OK, there are SBCL, Clojure, Racket... But you don’t find even Clojure in languages trends usually. APL, J – super cool! Seems dead (I don’t know what happens with K). ML, SML? By the way, Haskell role was to kill SML community, sure it is sad to acknowledge it, but it’s 100% true... Haskell try to be minimalistic and IMHO this can lead to death. Joachim, I’m not talking “it’s good/it’s bad”, “multiparadigm is good” or else... I don’t know what is right. It’s my observations only. Looks like it can happen. If we will look to Haskell history then we see strange curve. I’ll try to describe it with humour, so, please, don;t take it seriously 😊 • Let’s be pure lambda fanatics! • Is it possible to create a big application? • Is it possible to compile and optimize it?! • Let’s try... • Wow, it’s possible!!! (sure, it’s possible, Lisp did it long-long ago). • Looks like puzzle, can be used to write a lot of articles (there were articles about combinators, Jay/Cat/Scheme, etc, now there are a lot of Haskell articles – big interesting in academia. But IMHO academia interest to language can kill it too: Clean, Strongtalk, etc) • Stop! How to do I/O? Real programming?!! • Ohh, if we will wrap it in lambda and defer it to top level (main::IO ()), it will have I/O type (wrapper is hidden in type) • Let’s call it... Monad!! • Wow, cool! Works! Anybody should use monads! Does not your language have monads? Then we fly to you! (everybody forgot that monads are workaround of Haskell limitation and are not needed in another languages. 
Also they lead to low-performance code) • But how to compose them???!?! • We will wrap/unwrap, wrap/unwrap.. Let’s call it... transformers!!! “Monad transformers” – sounds super cool. Your language does not have “lift” operation, right? Ugh... • How to access records fields... How... That’s a question. ‘.’ - no! ‘#’ - no! Eureka! We will add several language extensions and voila! • To be continued... 😊 I love Haskell but I think such curve is absolutely impossible in commercial language. With IT managers 😊 To solve problem in a way when solution leads to another problem which needs new solution again and reason is only to keep lambda-abstraction-only (OK, Vanessa, backpacks also 😉) Can you imagine that all cars will have red color? Or any food will be sweet? It’s not technical question, but psychological and linguistic. Why native languages are not so limited? They even borrow words and forms from another one 😊 Haskell’s core team knows how better then me, and I respect a lot of Haskell users, most of them helped me A LOT (!!!). It’s not opinion even, because I don’t know what is a right way. Let’s call it observation and feeling of the future. I feel: Haskell has 3 cases: 1) to die 2) to change itself 3) to fork to another language How I see commercial successful Haskell-like language: • No monads, no transformers • There are dependent types, linear types • There are other evaluation models/abstractions (not only lambda) • Special syntax for records fields, etc • Less operators noise, language extensions (but it’s very disputable) • Solve problems with numerous from/to conversions (strings, etc) • Solve problems with libraries Last point needs explanation: • There is a lot of libraries written to check some type concepts only, no any business value. 
Also there are a lot of libraries written by students while they are learning Haskell: mostly without any business value/abandoned • There is situation when you have alternative libraries in one project due to dependencies (but should be one only, not both!) • Strange dependencies: I have installed Agda even! Why???! IMHO problems with libraries and lambda-only-abstraction lead to super slow compilation, big and complex compiler. So, currently I see (again, it’s my observation only) 2 big “camps”: 1. Academia, which has own interests, for example, to keep Haskell minimalistic (one-only-abstraction). Trade-off only was to add language extensions but they fragmentizes the language 2. Practical programmers, which interests are different from 1st “camp” Another my observation is: a lot of peoples tried Haskell and switched to another languages (C#, F#, etc) because they cannot use it for big enterprise projects (Haskell becomes hobby for small experiments or is dropped off). Joachim, I’m absolutely agreed that a big company can solve a lot of these problems. But some of them have already own languages (you can compare measure units in Haskell and in F#, what looks better...). When I said about killer app, I mean: devs like Ruby not due to syntax but RoR. The same Python: sure, Python syntax is very good, but without Zope, Django, TurboGears, SQLAlchemy, Twisted, Tornado, Cheetah, Jinja, etc – nobody will use Python. Sure, there are exceptions: Delphi, CBuilder, for example. But this is bad karma of Borland 😊 They had a lot of compilers (pascal, prolog, c/c++, etc), but... On the other hand after reincarnation we have C# 😊 Actually all these are only observations: nobody knows the future. /Best regards, Paul From: Joachim Durchholz Sent: 13 июля 2018 г. 21:49 To: haskell-cafe@haskell.org Subject: Re: [Haskell-cafe] Investing in languages (Was: What is yourfavourite Haskell "aha" moment?) Am 13.07.2018 um 09:38 schrieb PY: 1. Haskell limits itself to lambda-only. 
Example, instead to add other abstractions and to become modern MULTI-paradigm languages, "modern"? That's not an interesting property. "maintainable", "expressive" - THESE are interesting. Multi-paradigm can help, but if overdone can hinder it - the earliest multi-paradigm language I'm aware of was PL/I, and that was a royal mess I hear. So, point #1 is limitation in abstraction: monads, transformers, anything - is function. It's not good. Actually limiting yourself to a single abstraciton tool can be good. This simplifies semantics and makes it easier to build stuff on top of it. Not that I'm saying that this is necessarily the best thing. There were such languages already: Forth, Joy/Cat, APL/J/K... Most of them look dead. Which proves nothing, because many multi-paradigm languages look dead, too. When you try to be elegant, your product (language) died. Proven by Lisp... er, disproven. This is not my opinion, this is only my observation. People like diversity and variety: in food, in programming languages, in relations, anywhere :) Not in programming languages. Actually multi-paradigm is usually a bad idea. It needs to be done in an excellent fashion to create something even remotely usable, while a single-paradigm language is much easier to do well. And in practice, bad language design has much nastier consequences than leaving out some desirable feature. 2. When language has killer app and killer framework, IMHO it has more chances. But if it has _killer ideas_ only... So, those ideas will be re-implemented in other languages and frameworks but with more simple and typical syntax :) "Typical" is in the eye of the beholder, so that's another non-argument. It's difficult to compete with product, framework, big library, but it's easy to compete with ideas. It's an observation too :-) Sure, but Haskell has product, framework, big library. What's missing is commitment by a big company, that's all. 
Imagine Google adopting Haskell, committing to building libraries and looking for Haskell programmers in the streets: all of a sudden, Haskell would be the talk of the day. (Replace "Google" with whatever big-name company with deep pockets: Facebook, MS, IBM, you name it.) The language itself is not an argument for me. You are arguing an awful lot about missing language features ("multi-paradigm") to credibly make that statement. Arguments for me (I am a usual developer) are killer apps/frameworks/libraries/ecosystem/etc. Currently Haskell has only Stack; it's very good, but most languages have similar tools (not all have an LTS analogue, but the big frameworks are the same). Yeah, a good library ecosystem is very important, and from the reports I see on this list it's not really good enough. The other issue is that Haskell's extensions make it more difficult to have library code interoperate. Though that's a trade-off: the freedom to add language features vs. full interoperability. Java opted for the opposite: 100% code interoperability at the cost of a really annoying language evolution process, and that gave it a huge library ecosystem. But... I'm not going to make the Haskell developers' decisions. If they don't feel comfortable with reversing the whole culture and making interoperability trump everything else, then I'm not going to blame
➢ Eta does. Through a very nice FFI. But so does Haskell. We have a nice FFI for using C libs. I maintain a couple of libs that use it extensively; it works quite well.
I asked because I've never tried Eta. So, if you are right, it seems there is no reason to develop Eta...
➢ Can I have a definition and laws of "monad++"? Otherwise, I don't understand what you are talking about. If it obeys the monad laws, it is a monad. But I'll wait for the definition.
No better definition than the original: https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/computatio... You see, they are different.
➢ But it is not lazy, for one. Remember, laziness is our requirement here. Whatever you propose _must_ work in a context of laziness.
Does it mean that because Haskell is lazy (Clean is not), linear types are impossible in Haskell? If they are possible, why do we need monads?
➢ Second, the inability to track side effects in F# is not "simplification" and is not a benefit, but rather a limitation and a disadvantage of the language and its type system.
Why?
Haskell obviously "tracks" effects. But I already showed an example with the State monad. As I see it, nobody understands that the State monad does not solve the problem of spaghetti-style manipulation of global state. Even more, it masks the problem. But this was solved in OOP, where all changes of state happen in one place under FSM control (with explicit rules for denied transitions: instead of a change you have a request to change, a message, which can fail if the transition is denied). Haskell HAS mutable structures and side effects and allows spaghetti code. But the magical word "monad" allows one to forget the problem and the real solution, and to pretend there is no such problem at all (that it is automatically solved by the magical safety of Haskell). Sure, you can do it in Haskell too, but Haskell does not force you; Smalltalk, for example, does.
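To ground the State-monad discussion for readers who have not seen it, here is a minimal sketch (assuming the mtl package's Control.Monad.State): the "state" is just an extra argument threaded through a chain of pure functions, not a mutable global.

```haskell
import Control.Monad.State

-- A tiny counter: return the current value, then increment the state.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

main :: IO ()
main = print (runState (tick >> tick >> tick) 0)
-- runState yields (final result, final state): (2,3)
```

Whether this counts as "masking" global state or as making state-passing explicit and compiler-checkable is exactly the disagreement in this thread.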
We often repeat these words: "side effects", "tracks", "safe". But what do they actually mean? Can I have side effects in Haskell? Yes. Can I mix side effects? Yes. But in a more difficult way than in ML or F#, for example. What is the benefit? Actually, no benefit at all; this is easy to see with a simple experiment: if I have a big D program and I remove all "pure" keywords, will it automatically become buggy? No. If I stop using "pure" entirely, will it become buggy? No. If I add a "print" for debugging purposes in some subroutines, will they become buggy? No. If I mix read/write effects in my subroutine, will that make it buggy? No.
IMHO there is a substitution of concepts here. The roots of monads are not safety, but a workaround to have effects in pure lambdas. And after monads were introduced, a thesis appeared: "monads make code safer". The motivation for monads was not to track effects (because that is allegedly safer), but to inject/allow/introduce effects in a language like Haskell. A good example is the State monad, again. The State monad is needed to support manipulation of state (through argument replacement in a chain of lambdas), but that is a totally different thing from the idea of separating state manipulation into one isolated place under the control of some FSM with explicit rules for allowed and denied transitions. If I switch from the Reader monad to the RWST monad, will my code be more buggy? Again, no. Monads don't automatically decrease bugs or global-state-manipulation problems; they are a workaround for a specific Haskell problem. Other languages don't need monads. I CAN have monads in Python or JavaScript, but I don't need them. My point is: monads are not valuable by themselves. OCaml doesn't have any monads and all is fine 😊
But it's really a very philosophical question. I think monads are actually over-hyped. I stopped seeing the value of monads by themselves.
➢ Third, AFAIK CLR restrictions do not allow implementing things like Functor, Monad, etc. in F# directly because they can't support HKT. So they workaround the problem.
https://fsprojects.github.io/FSharpPlus/abstractions.html (btw, you can see that monad is a monoid here 😉)
➢ But again, F# cannot be expressive enough: no HKT, no ability to express constraints, no ability to track effects...
If F# has monads (you call computation expressions "monads"), then it CAN...
About HKT: yes, that's true. But maybe it's not such a big problem? Maybe you can write effective, good and expressive code without them? Otherwise, we would have to agree that all languages without HKT are not expressive...
➢ Really? You keep mentioning F#, and I struggle with it right now _because_ of such limitations. There are no meaningful ways to abstract over generics, it is impossible to reason about functions' behaviour from their type signatures (because side effects can happen anywhere), it has Option, but you can still get Null, you can't have constraints, etc., etc. It is sooooo muuuuch mooore limited.
IMHO the fear that "side effects can happen anywhere" has become a traditional thesis. And what is the problem if I add a "print" to some function?! 😊 Again, a substitution of concepts, of the monads' motivation. The Haskell compiler cannot transform code with side effects effectively, so I must isolate all side effects and mark such functions, but this is the Haskell compiler's problem, not mine. Why should the programmer help the compiler?! Look at the MLton compiler, or the OCaml one. IMHO they are good and work fine with side effects; a program WITH side effects under RWST or State, and one without those monads in ML, are equal: if I port 101 ML functions with side effects to Haskell, then I will add a monad and will have 101 functions with the same side effects, but now under a monad. I can propagate my monad anywhere 😊 All my functions can be under RWST or State. Again, this problem should be solved another way, not by wrapping the same actions in some wrapper type (a monad). A funny consequence of this: now my program becomes deeply nested lambdas. The compiler should try to "flatten" some of them, but I am not competent to say how good Haskell is at that. Anyway, performance will be worse than in the ML version. But the focus here is marking with a monad type, not avoiding side effects. And it's needed by the compiler, not by me. Maybe I'm not being clear, maybe it is hard to accept for people hypnotized by magic incantations about the power of monads, maybe I don't understand something 😊
➢ No, you can't.
Something like this: user?.Phone?.Company?.Name ?? "missing" ?
(do
    u  <- mbUser
    ph <- phoneOf u
    ...) <|> pure "missing"
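Spelled out as a complete, runnable sketch, the Maybe chain above corresponds to the C# null-propagation example; the record and field names (User, phoneCompany, companyName, etc.) are hypothetical stand-ins, and fromMaybe plays the role of `<|> pure "missing"`.

```haskell
import Data.Maybe (fromMaybe)

-- Hypothetical records mirroring user?.Phone?.Company?.Name
newtype Company = Company { companyName :: Maybe String }
newtype Phone   = Phone   { phoneCompany :: Maybe Company }
newtype User    = User    { userPhone :: Maybe Phone }

-- Each <- short-circuits to Nothing if any link is missing.
lookupName :: Maybe User -> String
lookupName mbUser = fromMaybe "missing" $ do
  u  <- mbUser
  ph <- userPhone u
  c  <- phoneCompany ph
  companyName c

main :: IO ()
main = putStrLn (lookupName Nothing)
-- prints "missing"
```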
➢ OCaml has existed for 22 years now, doing well and solving the problems it was designed for very well. So that is _already_ more than twice your prediction.
It's OCaml. It follows the golden middle, avoiding the dangerous corners 😉
➢ fields are not starting with “_” prefix, so I need to create lenses explicitly
➢ No you don't. You don't have to have "_" prefix to generate a lense. You have total control here.
Hmm, maybe I'm not right. I'm using microlens and calling `makeLensesFor`...
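For reference, a minimal sketch of the point being made here, assuming the microlens and microlens-th packages: `makeLensesFor` takes an explicit field-to-lens mapping, so legacy records need no "_" prefix (the record and lens names here are made up for illustration).

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Lens.Micro ((^.))
import Lens.Micro.TH (makeLensesFor)

-- A legacy record whose fields have no "_" prefix
data User = User { name :: String, age :: Int } deriving Show

-- Explicit mapping from field names to lens names
makeLensesFor [("name", "nameL"), ("age", "ageL")] ''User

main :: IO ()
main = putStrLn (User "Ada" 36 ^. nameL)
```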
➢ Can you define "business value" please? You mention it for a couple of times, so I am puzzled. Otherwise, it reminds me of https://twitter.com/newhoggy/status/999930802589724672
➢ For Haskell programmers, Java solves non-existing problems all the time :) Every single time you see on twitter or here something like "I replaced hundreds of lines of Java code with a single 'traverse'" you get the proof. And it happens so often.
It involves a long talk 😉 Business value: I'll illustrate it. Imagine some code:
.... lift ....                   -- NO business value in lift! It's infrastructure code
m <- loadModel "mymodel.bin"     -- there is business value
checkModel m rules               -- there is business value too
So, I can mark infrastructure code with red, business code with green, and compute the ratio, and then talk about the "usefulness/effectiveness" of a language: how much infrastructure noise the language has (Java, btw, IMHO will have a bad ratio too). I have tons of types and JSON to/from instances; I repeat models which are already coded in external 3rd-party services. But the F# team thinks like me: it's not the enterprise way to do things in such a manner, so they introduced type providers. That's only a small example. In Haskell I wrote a lot of infrastructure code, different instances, etc., etc. But other languages are more concentrated on business value, on the domain, on business logic. I thought about DSLs, but DSLs can be an antipattern and lead to other problems...
➢ Haskell code needs help from IDE, types hints, etc.
➢ Types are written and read by programmers. Java is impossible without IDE. What is the point here?
Usually it's difficult for programmers to understand. Most say: Perl looks like sh*t, just look at these %, $, etc. And they don't understand a simple thesis: language is about humans, about linguistics, not about computation. A language should not be oriented toward the compiler or its computational model; how would you like working with bytes and words only in C++? So, we have "a" and "the" in English, and we have "%" and "$" in Perl. I may not know an object's exact "type", but I can imagine its nature: it's a scalar, a vector, etc. In Haskell I can skip most signatures, and such code is not readable; I need Intero's help to check some types. It's a very bad situation. This is not true for Java, because you will have signatures in Java; you cannot skip them, right? And if I add operator noise on top (when I have no idea what some piece of ASCII art does), the code becomes IDE-centric.
➢ Better for whom? Definitely NOT better for me and my team using Haskell commercially. Again, to effectively meet requirements, functional and non-functional, we don't want just a mediocre compromise thing. I gave you an example with parsers already: different parsers have different tradeoffs. It is often a GOOD thing that there are many different libraries doing the same thing differently.
Hm, if I have several libraries doing similar things (only due to dependencies), then I have: 1) a big Haskell installation (1 GB?) 2) slow compilation 3) big binaries, etc. I understand, you have freedom of choice. But look at the IT world: C++ converged on one standard library (importing some Boost solutions, etc.), the same with R7RS, D with its Phobos, OCaml has Batteries from Jane St., Python 😊 IMHO it's a general trend. Imagine a project with 2 or 3 parser libraries, plus conduit and pipes, all pulled in through dependencies. So IMHO my point is not so strange or weird 😉 I'm not talking about dropping those libraries (parsers, etc.), but about creating one solid library whose components depend only on it. The alternatives would still live in the repos; whoever wants them can use them without any problem. Something like Qt, Boost, Gtk, etc.
Let me be more precise. I'm comfortable with Haskell on the whole, but 1) I have discussed Haskell with other people, 2) I read the opinions of other people in industry, 3) I have been a programmer since '97 and I have a critical kind of mind, so all of this also lets me look from another POV. And I have seen how similar this is to the fate of other languages which had good, elegant ideas: they followed one concept, one abstraction only. This is the way to become marginalized, which is what happened to a lot of them. Actually, Haskell is already marginal: check how many programmers use it in the world; in most statistics it doesn't even show up. OK, I'm a geek, in real life too, but the IT industry is not a geek 😊
/Best regards
On Sun, Jul 15, 2018 at 1:28 AM Paul

If you take a large D program and remove all "pure" annotations from it, it will not become buggy _immediately_. But its chance of becoming buggy after a few changes will increase dramatically.
Constraints and forced purity in Haskell are tools that let one design safe programs, letting the compiler catch your (or a junior intern's) hand before a bug is introduced. And I agree with Alexey: the lack of the ability to express "never ever should this branch of code change anything outside" is a big problem in the majority of type systems.
On Sun, Jul 15, 2018 at 1:28 AM Paul
wrote: Hello Alex!
A small disclaimer: none of the members of our team has an academic background. We all have different backgrounds: C#, Java, Ruby, Python, C, even Perl if I am not mistaken. Yet we ended up with FP first, and then with Haskell.
We have switched to Haskell from Scala, which _is_ a multi-paradigm language borrowing bits and pieces from other languages/paradigms and mixing them together. It is enormously hard work to do that, and for that I very much respect
Oh, my 1st question would be: did you try Eta or Frege? Maybe I'm wrong, but Eta should support Haskell libraries as well as Java ones? They allow you to use libraries from both worlds...
As a result, the language becomes overly complicated and less useful.
Yes, this is the other side. You know, everything has several sides: good and bad...
Your joke about how Haskell has been made misses one point: it was initially designed as a lazy language (at least as far as I know). Many features that Haskell has now are there because of laziness: if you want to be lazy, then you have to be pure, you have to sequence your effects, etc.
True. Laziness makes Haskell unique. I think Haskell made laziness popular in modern languages, although it was known long ago (as data in "infinite streams", etc.). I think Miranda was lazy, so Haskell is lazy too 😊 And IMHO there was some lazy dialect of ML (maybe I'm wrong).
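A tiny example of the laziness being discussed: defining an infinite list is perfectly fine, because only the prefix actually demanded is ever evaluated.

```haskell
-- An infinite list: safe to define under lazy evaluation.
naturals :: [Integer]
naturals = [0 ..]

-- Only the first n even numbers are ever computed.
firstEvens :: Int -> [Integer]
firstEvens n = take n (filter even naturals)

main :: IO ()
main = print (firstEvens 5)
-- [0,2,4,6,8]
```

This is also why effect ordering must be made explicit: with no fixed evaluation order, side effects cannot simply happen "where they are written".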
"Let's defer lambda, name it IO and let's call it Monad" - this bit isn't even funny. Monad isn't IO. IO happens to be a monad (as many things do, List as an example), but monad isn't IO and has nothing to do with IO. A horse is classified as Mammal, but Mammal doesn't mean horse _at all_.
Sure. I mean that the need for side effects (and firstly I/O) led to the monads.
In a context of a lazy language, you need to sequence your effects (including side effects), that's the first point. The second is that instead of disappearing from Haskell, monads (and other concepts) are making their way to other languages. Scala has them, F# has them, even C# has them (however indirectly). Try to take away List Monad from C# developers and they'll kill you ;)
IMHO it's better to have less infrastructure code. Better to hide all the "machinery" in the compiler.
My point was that monads are a workaround for a Haskell problem; historically, this was the reason for their appearance. And if I don't have that limitation in my language, I don't need any monads. What are the benefits of monads in ML, for example? They are used in F#, but 1) computation expressions are not monads but a step forward, "monads++", and 2) they play a different role in F#: simplifying code. And you *can* avoid them in every language except Haskell. For example, Prolog can be "pure" and do I/O without monads; Clean can as well, and so can F#. Monads have pros, sure, but they do not compose, and the workaround leads to another workaround: transformers. I'm not unique in my opinion: https://www.youtube.com/watch?v=rvRD_LRaiRs All of this looks like overengineering due to the mentioned limitation. There is no such limitation in ML or F#. D has the keyword "pure" and didn't introduce monads. Performance is a very important feature of a language, and that limitation is reason #1 why Haskell has bad and unpredictable performance. A "do" block is not the same as a "flat" block of C# statements, and its performance is not the same. I can achieve the Maybe effect with nullables + exceptions or the ?-family of operators, List with permutations/LINQ, guard with if + break/continue, and do it without sacrificing performance. ListT/conduits are just generators/enumerators. The benefit of monads IMHO is small; they are a workaround for a Haskell problem and are not needed in other languages. Sure, there are monads in OCaml, JavaScript, Python (as experimental libraries), but the reason is hype. Nobody will remember them in 5-10 years...
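For readers weighing the LINQ comparison, here is the list monad in a few lines, a minimal sketch only: the same do-notation used for Maybe and State expresses nondeterministic choice over lists, which is what LINQ's query syntax emulates.

```haskell
import Control.Monad (guard)

-- The list monad: each <- ranges over all choices, guard filters them.
pairsSummingTo :: Int -> [(Int, Int)]
pairsSummingTo n = do
  x <- [1 .. n]        -- pick any x
  y <- [x .. n]        -- pick y >= x to avoid duplicate pairs
  guard (x + y == n)   -- keep only pairs that sum to n
  return (x, y)

main :: IO ()
main = print (pairsSummingTo 5)
-- [(1,4),(2,3)]
```

The point at issue in the thread is whether this uniformity (one notation for Maybe, State, lists, parsers, ...) is a benefit or merely a workaround; the sketch at least shows what the uniformity looks like.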
Actually this is very-very subjective IMHHHHO 😊
Lenses and generic lenses help, so be it. But I don't think that anything prevents Haskell from having it, and I don't think that Haskell as a language needs a dramatic change as you depict to make it happen. Just a feature.
When I have legacy code, there are a lot of types whose fields do not start with the "_" prefix, so I need to create lenses explicitly... "Infrastructure" code. What is the business value of such code? Nothing. To a non-Haskell programmer it looks like you are solving a non-existent problem 😊 (*very, very provocative point: all Haskell solutions look overengineered. The reason is lambda-abstraction-only: when you try to build something big from tiny pieces, the process becomes overengineered. Imagine pyramids built of small bricks*).
I don't agree that operators are noise. You certainly can write Haskell almost without operators if you wish.
Here I agree with D. Knuth's idea of literate programming: if code cannot be easily read and understood on a hard copy, then the language used is not fine. Haskell code needs help from an IDE, type hints, etc. And I often meet cases where somebody does not understand which monads are at work in "do" blocks. Also, there are a lot of operators in different libraries, and no way to know what some operator means (different libraries, even different versions, have their own sets of operators).
As for extensions, I think that many more should be just switched on by default.
+1
You mean that conversion should happen implicitly? Thank you, but no, thank you. This is a source of problems in many languages, and it is such a great thing that Haskell doesn't coerce types implicitly.
No... Actually, I have no idea what is better. Currently there are a lot of conversions. Some library functions expect String, others Text, also ByteString, lazy/strict, and the same with the numbers (Word/Int/Integer). So conversions happen often.
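Concretely, the conversions in question are explicit functions from the text and bytestring packages (which ship with GHC); a small sketch, with hypothetical helper names:

```haskell
import qualified Data.ByteString as BS
import qualified Data.Text as T
import qualified Data.Text.Encoding as TE

-- String -> Text
toText :: String -> T.Text
toText = T.pack

-- Text -> strict ByteString (UTF-8 encoded)
toBytes :: T.Text -> BS.ByteString
toBytes = TE.encodeUtf8

main :: IO ()
main = do
  let t = toText "hello"
  putStrLn (T.unpack t)           -- Text back to String
  print (BS.length (toBytes t))   -- byte length of the UTF-8 encoding
```

The upside of keeping these explicit (rather than coercing implicitly) is that encoding decisions are visible in the code; the downside is exactly the noise being complained about here.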
I don't understand this "no business value" statement. Value for which business? What does it mean "check types, no business value"?
There are libraries which do nothing at run time. Only type-level play. Only abstractions over types. And somebody says: oh man, see how many libraries Haskell has. But compare the libraries of Haskell with those of Java, C#, JavaScript, Perl, Python 😊 All the libraries of Java, Python... have business value. Real-world functionality. Not abstract play with types. But the more important point is the case of the installed Agda 😊, or alternative libraries that do the same/similar things. The reason is important: Haskell moves a lot of functionality into libraries, which is not good design IMHO. This is the root of the problem. Better to have one good, solid library bundled with GHC itself ("batteries included"), with only specific things living in libraries and frameworks. Monads and monad transformers are a central thing in Haskell, yet they are located in libraries. There are standard parser combinators in GHC itself, but you will have another one (or more than one!) in the same project. Etc., etc...
Also, the installed GHC... Why is it so big!? IMHO it's time to clean up the ecosystem, to reduce it to "batteries" 😊
And then it falls into a famous joke: "The problem with Open Source Software is YOU because YOU are not contributing" :) Meaning that if we want more good libs then we should write more good libs :)
Absolutely true 😊
On Sat, Jul 14, 2018 at 5:05 PM Paul
wrote: I understand that my points are disputable. Sure, for example, the multi-paradigm Oz is dead 😊 Any rule has exceptions. But my point was that people don't like elegant, one-abstraction languages. It's my observation. For me, Smalltalk was a good language (mostly dead, except Pharo, which looks cool). Forth, a high-level "stack-around-assembler", is mostly dead (Factor looks abandoned; only 8th looks super cool, but it's not free). What else? Lisp? OK, there are SBCL, Clojure, Racket... But you usually don't even find Clojure in language trend rankings. APL, J: super cool! Seemingly dead (I don't know what happened to K). ML, SML? By the way, Haskell's role was to kill the SML community; sure, it is sad to acknowledge it, but it's 100% true...
Haskell tries to be minimalistic, and IMHO this can lead to death. Joachim, I’m not saying “it’s good/it’s bad”, “multi-paradigm is good” or anything else... I don’t know what is right. These are only my observations. It looks like it could happen.
If we look at Haskell’s history, we see a strange curve. I’ll try to describe it with humour, so please don’t take it seriously 😊
· Let’s be pure lambda fanatics!
· Is it possible to create a big application?
· Is it possible to compile and optimize it?!
· Let’s try...
· Wow, it’s possible!!! (sure, it’s possible, Lisp did it long ago).
· Looks like a puzzle; can be used to write a lot of articles (there were articles about combinators, Joy/Cat/Scheme, etc.; now there are a lot of Haskell articles – big interest in academia. But IMHO academic interest in a language can kill it too: Clean, Strongtalk, etc.)
· Stop! How do we do I/O? Real programming?!!
· Ohh, if we wrap it in a lambda and defer it to the top level (main :: IO ()), it will have an I/O type (the wrapper is hidden in the type)
· Let’s call it... Monad!!
· Wow, cool! It works! Everybody should use monads! Your language doesn’t have monads? Then we fly to you! (everybody forgot that monads are a workaround for a Haskell limitation and are not needed in other languages. Also they lead to low-performance code)
· But how to compose them???!?!
· We will wrap/unwrap, wrap/unwrap... Let’s call it... transformers!!! “Monad transformers” – sounds super cool. Your language does not have a “lift” operation, right? Ugh...
· How to access record fields... How... That’s the question. ‘.’ – no! ‘#’ – no! Eureka! We will add several language extensions and voilà!
· To be continued... 😊
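For the record, the record-field saga joked about above did end in language extensions. A minimal sketch of `OverloadedRecordDot` (available in GHC 9.2 and later; the `Person` type here is invented purely for illustration):

```haskell
{-# LANGUAGE OverloadedRecordDot #-}

-- Dot-syntax field access, enabled by a language extension
-- rather than by core language design.
data Person = Person { name :: String, age :: Int }

greeting :: Person -> String
greeting p = "Hello, " ++ p.name
```

Here `greeting (Person "Ada" 36)` evaluates to `"Hello, Ada"`.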
I love Haskell, but I think such a curve is absolutely impossible for a commercial language, with IT managers 😊 Solving a problem in a way where the solution leads to another problem, which needs a new solution again, and the only reason is to keep lambda as the only abstraction (OK, Vanessa, Backpack too 😉). Can you imagine all cars being red? Or all food being sweet? It’s not a technical question, but a psychological and linguistic one. Why are natural languages not so limited? They even borrow words and forms from one another 😊
Haskell’s core team knows better than me, and I respect a lot of Haskell users; most of them *helped me A LOT* (!!!). It’s not even an opinion, because I don’t know what the right way is. Let’s call it an observation and a feeling about the future.
I feel Haskell has 3 options: 1) to die 2) to change itself 3) to fork into another language
This is how I see a commercially successful Haskell-like language:
· No monads, no transformers
· There are dependent types, linear types
· There are other evaluation models/abstractions (not only lambda)
· Special syntax for records fields, etc
· Less operator noise and fewer language extensions (but this is very disputable)
· Solve the problem of numerous from/to conversions (strings, etc.)
· Solve the problems with libraries
Last point needs explanation:
· There are a lot of libraries written only to check some type concepts, with no business value. Also there are a lot of libraries written by students while learning Haskell: mostly without any business value, and abandoned
· There are situations when you have alternative libraries in one project due to dependencies (but there should be only one, not both!)
· Strange dependencies: I even have Agda installed! Why???!
IMHO the problems with libraries and lambda-as-the-only-abstraction lead to super slow compilation and a big, complex compiler.
So, currently I see (again, this is only my observation) 2 big “camps”:
1. Academia, which has its own interests, for example keeping Haskell minimalistic (one abstraction only). The only trade-off was adding language extensions, but they fragment the language
2. Practical programmers, whose interests are different from those of the 1st “camp”
Another observation of mine: a lot of people tried Haskell and switched to other languages (C#, F#, etc.) because they could not use it for big enterprise projects (Haskell becomes a hobby for small experiments, or is dropped).
Joachim, I absolutely agree that a big company could solve a lot of these problems. But some of them already have their own languages (compare units of measure in Haskell and in F#, and see which looks better...).
When I talked about a killer app, I meant: devs like Ruby not because of its syntax but because of RoR. The same with Python: sure, Python’s syntax is very good, but without Zope, Django, TurboGears, SQLAlchemy, Twisted, Tornado, Cheetah, Jinja, etc., nobody would use Python. Sure, there are exceptions: Delphi and C++Builder, for example. But that is Borland’s bad karma 😊 They had a lot of compilers (Pascal, Prolog, C/C++, etc.), but... On the other hand, after the reincarnation we got C# 😊 Actually, all of these are only observations: nobody knows the future.
/Best regards, Paul
*From: *Joachim Durchholz
*Sent: *13 July 2018 21:49 *To: *haskell-cafe@haskell.org *Subject: *Re: [Haskell-cafe] Investing in languages (Was: What is your favourite Haskell "aha" moment?) On 13.07.2018 at 09:38, PY wrote:
1. Haskell limits itself to lambda-only. For example, instead of adding other
abstractions and becoming a modern MULTI-paradigm language,
"modern"?
That's not an interesting property.
"maintainable", "expressive" - THESE are interesting. Multi-paradigm can
help, but if overdone can hinder it - the earliest multi-paradigm
language I'm aware of was PL/I, and that was a royal mess I hear.
So, point #1 is a limitation in
abstraction: monads, transformers, anything - all are functions. It's not
good.
Actually, limiting yourself to a single abstraction tool can be good.
This simplifies semantics and makes it easier to build stuff on top of it.
Not that I'm saying that this is necessarily the best thing.
There were such languages already: Forth, Joy/Cat, APL/J/K... Most of
them look dead.
Which proves nothing, because many multi-paradigm languages look dead, too.
When you try to be elegant, your product (language) dies.
Proven by Lisp... er, disproven.
This is not my opinion, this is only my observation. People like
diversity and variety: in food, in programming languages, in relations,
anywhere :)
Not in programming languages.
Actually multi-paradigm is usually a bad idea. It needs to be done in an
excellent fashion to create something even remotely usable, while a
single-paradigm language is much easier to do well.
And in practice, bad language design has much nastier consequences than
leaving out some desirable feature.
2. When a language has a killer app and a killer framework, IMHO it has more
chances. But if it has _killer ideas_ only... then those ideas will be
re-implemented in other languages and frameworks, but with simpler
and more typical syntax :)
"Typical" is in the eye of the beholder, so that's another non-argument.
It's difficult to compete with a product, a
framework, a big library, but it's easy to compete with ideas. That's an
observation too :-)
Sure, but Haskell has the product, the framework, the big library.
What's missing is commitment by a big company, that's all. Imagine
Google adopting Haskell, committing to building libraries and looking
for Haskell programmers in the streets - all of a sudden, Haskell is
going to be the talk of the day. (Replace "Google" with whatever
big-name company with deep pockets: Facebook, MS, IBM, you name it.)
the language itself is not an argument for me.
You are arguing an awful lot about missing language features
("multi-paradigm") to credibly make that statement.
Arguments for me (I
am an ordinary developer) are killer apps/frameworks/libraries/ecosystem/etc.
Currently Haskell has only Stack - it's very good, but most languages
have similar tools (not all have an LTS analogue, but the big frameworks are
the same).
Yeah, a good library ecosystem is very important, and from the reports I
see on this list it's not really good enough.
The other issue is that Haskell's extensions make it more difficult to
have library code interoperate. Though that's a trade-off: The freedom
to add language features vs. full interoperability. Java opted for the
other opposite: 100% code interoperability at the cost of a really
annoying language evolution process, and that gave it a huge library
ecosystem.
But... I'm not going to make the Haskell developers' decisions. If they
don't feel comfortable with reversing the whole culture and make
interoperability trump everything else, then I'm not going to blame
them. I'm not even going to predict anything about Haskell's future,
because my glass orb is out for repairs and I cannot currently predict
the future.
Regards,
Jo
_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

On 15.07.2018 at 18:06, Paul wrote:
* But it is not lazy - one. Remember, laziness is our requirement here. Whatever you propose _must_ work in a context of laziness.
Does it mean that, because Haskell is lazy (and Clean is not), linear types are impossible in Haskell?
Laziness and linear types are orthogonal.
If they are possible, why do we need monads?
"Monadic" as a term is at the same level as "associative". A very simple concept that is visible everywhere, and if you can arrange your computations in a monadic manner you'll get a certain level of sanity. And lo and behold, you can even write useful libraries just based on the assumption that you're dealing with monadic structures, that's monad transformers (so they're more interesting than associativity). So monads are interesting and useful (read: important) regardless of whether you have laziness or linear types. Again: orthogonal.
Haskell “tracks” effects, obviously. But I already showed an example with the State monad. As far as I can see, nobody understood that the State monad does not solve the problem of spaghetti-style manipulation of global state.
Actually, that's pretty well-known. Not just for State but for anything that hides state out of plain sight, i.e. somewhere other than in function parameters: either some struct type, or a returned partially-applied function that captures the data. People get bitten by those things, and they learn to avoid these patterns except where they're safe, just as with spaghetti code, which people stopped writing years ago (nowadays it's more spaghetti data, but at least that's analyzable). So I think if you don't see anybody explicitly mentioning spaghetti issues with State, it's because for some people it's just hiding in plain sight: they either aren't consciously aware of it, or find the area so self-explanatory that they don't think they need to explain it. Or you simply misunderstood what people are saying.
But it was solved in OOP when all changes of state happen in *one place*, under FSM control
Sorry, but that's not what OO is about. Also, I do not think that you're using general FSMs, else you'd be having transition spaghetti.
(with explicit rules for denied transitions: instead of a change you have *a request to change*, a message, which can fail if the transition is denied).
Which does nothing about keeping transitions under control. Let me repeat: what you call a "message" is just a standard synchronous function call. The one difference is that the caller allows the target type to influence which function actually gets called, and while that's powerful, it's quite far from what people assume when you throw the "message" terminology around. This conflation of terminology has been misleading people since the invention of Smalltalk. I wish people would finally stop using that terminology and instead highlight where Smalltalk really deviates from other OO languages (#doesNotUnderstand, clean everything-is-an-object concepts, Metaclasses Done Right). The message-send terminology is just a distraction.
Haskell HAS mutable structures and side-effects, and allows spaghetti-code.
Nope. Its functions can model these, even to the point that the Haskell code is still spaghetti. But that's not the point. The point is that Haskell makes it easy to write non-spaghetti.
BTW you have similar claims about FSMs. Ordinarily they are spaghetti incarnate, but you say they work quite beautifully if done right. (I'm staying sceptical because your arguments in that direction didn't make sense to me, but that might be because I'm lacking background information, and filling in these gaps is really too far off-topic to be of interest.)
But the magical word “monad” allows people to forget about the problem and the real solution, and to pretend that no such problem exists at all (it is automatically solved by the magical safety of Haskell). Sure, you can do it in Haskell too, but Haskell does not force you, while Smalltalk, for example, forces you.
WTF? You can do spaghetti in Smalltalk. Easily actually, there are plenty of antipatterns for that language.
We often repeat these words: “side-effects”, “tracks”, “safe”. But what do they actually mean? Can I have side-effects in Haskell? Yes. Can I mix side-effects? Yes. But in a more difficult way than in ML or F#, for example. What is the benefit?
That it is difficult to *accidentally* introduce side effects; or rather, the problems of side effects. Formally, no Haskell program can have a side effect (unless it uses unsafePerformIO or the FFI, but that's not what we're talking about here).
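What "tracking" means in practice is simply that effects appear in types, so the type checker rejects accidental mixing. A minimal sketch (the function names are invented for illustration):

```haskell
-- A pure function: its type promises there are no side effects.
double :: Int -> Int
double x = x * 2

-- An effectful action: the IO in the type marks it.
greet :: String -> IO ()
greet n = putStrLn ("hi " ++ n)

-- The following line would be rejected by the type checker,
-- because greet "oops" has type IO (), not Int:
-- broken x = double (greet "oops")
</imports>
```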
Actually, no benefit at all, really.
You *should* listen more. If the overwhelming majority of Haskell programmers who use it in practice tell you that there are benefits, you should question your analysis, not their experience. You should ask rather than make bold statements that run contrary to practical experience. That way, everybody learns: you about your misjudgements, and (maybe) Haskell programmers about the limits of the approach. The way you're approaching this is just going to provoke an antibody reaction: everybody homing in on you with the sole intent of neutralizing you. (Been there, done that, on both sides of the fence.)
It's easily understood with a simple experiment: if I have a big D program and I remove all “pure” keywords, will it automatically become buggy? No. If I stop using “pure” entirely, will it become buggy? No.
Sure. It will still be pure.
If I add a “print” for debugging purposes in some subroutines, will they become buggy? No.
Yes they will. Some tests will fail if they expect specific output. If the program has a text-based user interface, it will become unusable.
If I mix read/write effects in my subroutine, will that make it buggy? No.
Yes they will become buggy. You'll get aliasing issues. And these are the nastiest thing to debug because they will hit you if and only if the program is so large that you don't know all the data flows anymore, and your assumptions about what might be an alias start to fall down. Or not you but maybe the new coworker who doesn't yet know all the parts of the program. That's exactly why data flow is being pushed to being explicit.
But it's really a very philosophical question. I actually think monads are over-hyped; I stopped seeing the value of monads by themselves.
Yeah, a lot of people think that monads are somehow state. It's just that state usually is pretty monadic; or rather, the functions built for computing a "next state" are by nature monadic, so that was the first big application area of monads. But monads are really much more general than handling state. It's like assuming that associativity is for arithmetic, when there's a whole lot of other associative operators in the world, some of them even useful (such as string concatenation).
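One concrete illustration of "more general than state": the list monad models nondeterminism with the very same do-notation, and there is no state anywhere (a minimal sketch):

```haskell
-- In the list monad, (>>=) runs the rest of the computation once
-- per element, so do-notation enumerates all combinations.
pairs :: [(Int, Char)]
pairs = do
  n <- [1, 2]
  c <- "ab"
  return (n, c)
```

`pairs` evaluates to `[(1,'a'),(1,'b'),(2,'a'),(2,'b')]`.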
* Third, AFAIK CLR restrictions do not allow implementing things like Functor, Monad, etc. in F# directly, because it cannot support HKTs. So they work around the problem.
https://fsprojects.github.io/FSharpPlus/abstractions.html (btw, you can see that monad is a monoid here 😉)
Nope, a monoid is a special case of a monad (the case where all input and output types are the same). (BTW a monoid is associativity + a neutral element. I'm not 100% sure whether monad's "return" qualifies as a neutral element, and my monoid-equals-monotyped-monad claim above may fall down if it does not. Also, different definitions of monad add different operators, so the sub-concept relationship may not be straightforward.) (I'm running out of time and interest, so I'll leave the remaining points uncommented.) Regards, Jo
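Whatever the right way to relate the two structures, the laws being compared can at least be written down: `<>`/`mempty` for Monoid, and `>>=`/`return` for Monad. A minimal sketch checking a few instances (the property names are invented for illustration):

```haskell
-- Monoid laws, checked for String's (<>) and mempty.
monoidAssoc :: String -> String -> String -> Bool
monoidAssoc a b c = (a <> b) <> c == a <> (b <> c)

monoidUnit :: String -> Bool
monoidUnit a = (mempty <> a == a) && (a <> mempty == a)

-- The matching monad laws for Maybe: return is the unit of (>>=).
monadLeftUnit :: Int -> Bool
monadLeftUnit x = (return x >>= \y -> Just (y + 1)) == Just (x + 1)

monadRightUnit :: Maybe Int -> Bool
monadRightUnit m = (m >>= return) == m
```

The structural analogy (an associative combining operation with a unit) is visible at the law level, whichever direction one argues the special-case relationship.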

So I think if you don't see anybody explicitly mentioning spaghetti issues with State, it's because for some people it's just hiding in plain sight: they either aren't consciously aware of it, or find the area so self-explanatory that they don't think they need to explain it.
IMHO the State monad solution is orthogonal to my point. It does not force you to isolate state change in one place with explicit control; it only marks the places where it happens. This info is needed by the compiler, not by me. For me there are no benefits. The benefit to me would be isolating the change, but with State I can (and all of us do it!) smear change points throughout the code. So my question is: what exact problem does the State monad solve? Whose problem, mine or the compiler's? Haskell's pure lambda-only-abstraction limitation? OK, if we imagine another Haskell, similar to F#, would I still need the State monad? IMHO no. My point is: the State monad is super in Haskell, and absolutely a waste in other languages. I would isolate mutability in another manner: more safe, robust and controllable. Recap: 1. The State monad allows you to mark changes of THIS state, so you can easily find where THIS state is changing (tracking changes). 2. A singleton with an FSM allows you to *control* change and to isolate all change logic in one place. The 1st allows spaghetti, the 2nd does not. The 2nd forces you into another model: not changes, but change requests, which can return "not possible". The Haskell way, the "possible/not possible" check happens wherever you change state in the State monad: anywhere. So my initial point is: the State monad is about Haskell's abstraction problems, not about developer problems.
Sorry, but that's not what OO is about. Also, I do not think that you're using general FSMs, else you'd be having transition spaghetti.
To be precise: yes, you are right. But such a model constrains me more than the monadic model does. When you create a singleton "PlayerBehavior" and have all setters/getters in this singleton, already checking changes (in one place!), the next step is to switch from checks to an explicit FSM, in the same place. Haskell offers nothing for this. You *can* do it, but monads don't force you, and they are about Haskell's problems, not mine. The motivation of the State monad is not to solve a problem but to introduce state mutability into Haskell; this is my point. OK, the State monad has a helpful side-effect: it allows tracking changes of this concrete state, but I can do that with my editor. It's more valuable to Haskell itself than to me, because there is no problem mutating state: Haskell allows it, and Haskell also does not guard you from mutating this state anywhere in the application.
I agree with you 100%. My point is about emphasis only; my thesis is: monads have value, but it's small. It's justified in Haskell with its limitation to one abstraction, but I don't need monads in other languages; their value in other languages is super-small (if it even exists). So the motivation for introducing monads (for me; sure, I'm very subjective) was to work around the Haskell model, not to make code safer. I'm absolutely sure: monads have nothing to do with safety. It's like using aspirin for a serious medical problem :)
Let me repeat: What you call a "message" is just a standard synchronous function call. The one difference is that the caller allows the target type to influence what function actually gets called, and while that's powerful, it's quite far from what people assume if you throw the "message" terminology around.
I mentioned Erlang earlier: the same; you send a message to an FSM, which is a lightweight process. The idea of agents and messages is the same in Smalltalk, in QNX, in Erlang, etc. So "message" does not always mean "synchronous call". For example, QNX "optimizes" local messages, so they are more lightweight in comparison with remote messages (which are naturally asynchronous). But the "message" abstraction is the same, and it is more high-level than the synchronous/asynchronous dichotomy. It allows you to isolate logic; this is the point. Haskell has nothing to do with that: you smear logic everywhere, but now you mark it explicitly, and you have the illusion that your code is safer.
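To make the object of this dispute concrete, here is a hand-rolled State (dependency-free; the real one lives in the transformers/mtl packages), which shows both sides of the argument: each change is marked in the type, yet nothing stops changes being scattered through the computation. `modify'` and `counter` are invented for illustration:

```haskell
-- A minimal State monad: a function from a state to a result
-- plus the next state.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= f = State $ \s ->
    let (a, s') = g s in runState (f a) s'

-- Mark a state change in the type.
modify' :: (s -> s) -> State s ()
modify' f = State $ \s -> ((), f s)

-- The point under discussion: every modify' is visible in the type,
-- but nothing prevents them being scattered anywhere in the do-block.
counter :: State Int String
counter = do
  modify' (+1)
  modify' (+1)
  modify' (+1)
  pure "done"
```

`runState counter 0` evaluates to `("done", 3)`: the changes are tracked, but not centralized.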
But that's not the point. The point is that Haskell makes it easy to write non-spaghetti.
How? In Haskell I propagate data to a lot of functions (as an argument, or as a hidden argument in some monad), but with singleton+FSM you cannot do that: the data is hidden from you, and you can only *call logic*, not *access data*. Logic in Haskell ends up smeared across a lot of functions. You *CAN* avoid it, but Haskell does not force you.
BTW you have similar claims about FSMs. Ordinarily they are spaghetti incarnate, but you say they work quite beautifully if done right. (I'm staying sceptical because your arguments in that direction didn't make sense to me, but that might be because I'm lacking background information, and filling in these gaps is really too far off-topic to be of interest.)
I respect your position. Everybody has different experience, and this is basically very good!
We often repeat this: “side-effects”, “tracks”, “safe”. But what does it actually mean? Can I have side-effects in Haskell? Yes. Can I mix side-effects? Yes. But in more difficult way than in ML or F#, for example. What is the benefit?
That it is difficult to accidentally introduce side effects. Or, rather, the problems of side effects. Formally, no Haskell program can have a side effect (unless using UnsafeIO or FFI, but that's not what we're talking about here).
Actually, if we look at this from a high level, as a "black box", we see that it's true. Haskell allows you to have them and to mix them, just in a different manner.
Yes they will. Some tests will fail if they expect specific output. If the program has a text-based user interface, it will become unusable.
And vice versa: if I remove a "print" from such tests and add "pure", they can fail too. IMHO purity/impurity in your example is about expected behavior and its violation, not about "more pure means fewer bugs". A pure function can violate its contract just as well as an impure one.
Yes they will become buggy. You'll get aliasing issues. And these are the nastiest thing to debug because they will hit you if and only if the program is so large that you don't know all the data flows anymore, and your assumptions about what might be an alias start to fall down. Or not you but maybe the new coworker who doesn't yet know all the parts of the program. That's exactly why data flow is being pushed to being explicit.
So, to avoid this I should not mix read/write monads, and should avoid RWST. In that case they should be removed from the language, and monad transformers too. My point is that there is some misunderstanding here. I often hear "side-effects are related to errors", "we should avoid them", "they lead to errors", etc., but IMHO the pure/impure distinction is needed by the FP language compiler, not by me. That is the real motto. Adding side-effects does not automatically lead to bugs; mostly it does not. It is more correct to say that distinguishing pure from impure code makes it easier to analyze the code, to manipulate it, to transform it (as a programmer I can transform F# code easily *because there are no monads*; in Haskell the *compiler* can transform code easily *because of monads*).
A more important argument for me is the example with Free monads. They allow you to simulate behavior, to check logic without involving real external actions (side-effects). Yes, OK, this is an argument. It's not explicitly related to buggy code, but it's useful. It reminds me of homoiconic Lisp code, where code can be processed as data, as an AST.
Actually, I had a big, interesting discussion in my company with people who do not like FP (the root of why I began asking myself such questions), and I heard their arguments. I tried to find a solid base for mine. But currently I see that I like Haskell's solutions in themselves, and I cannot show concrete examples where they are needed in the real world, outside Haskell-specific limitations. I know that those limitations lead to slow compilation and a big, complex compiler; I cannot prove that side-effects mean "lead to errors", or (more interestingly) that it is bad to separate side-effects from each other. F#, ML and Lisps have "do-" blocks and no problem with them. They don't need transformers to mix 2 different effects in one do-block. If you can prove that this decision leads to bugs and the Haskell solution does not, it will be a bomb :) I think there will be a lot of people in CS who will never agree with you. --- Best regards, Paul
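The Free-monad argument above can be shown in a few lines: a toy logging DSL with a pure interpreter, so the logic is testable with no real side effects. This is a dependency-free sketch (the `free` package provides the real machinery), and the `LogF` vocabulary is invented for illustration:

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- A minimal Free monad: a program is either a result, or one
-- instruction wrapping the rest of the program.
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)

-- A toy effect vocabulary: log a line, then continue.
data LogF next = Log String next deriving Functor

logMsg :: String -> Free LogF ()
logMsg s = Free (Log s (Pure ()))

program :: Free LogF Int
program = do
  logMsg "start"
  logMsg "work"
  pure 42

-- A pure interpreter: collect the log instead of printing it,
-- so the program's logic can be checked without any I/O.
runPure :: Free LogF a -> ([String], a)
runPure (Pure a)         = ([], a)
runPure (Free (Log s k)) = let (ss, a) = runPure k in (s : ss, a)
```

`runPure program` evaluates to `(["start","work"], 42)`; an I/O interpreter for the same `program` could be written separately without touching the logic.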

Hi Paul, Thus quoth PY on Mon Jul 16 2018 at 09:44 (+0200):
So, motivation of monads introduction (for me, sure, I'm very subjective) is to workaround Haskell model,
Sometimes (e.g., when you want to be able to prove correctness) you actually want to express everything in a small basis of concepts/operations/tools. To me, monads are cool precisely because they allow _explicit_ sequencing of actions in a non-sequential model. By the way, the lift function you mentioned in a previous e-mail does have value for the programmer: it shows at which level of the monad transformer stack the action takes place (for example). Thus quoth PY on Mon Jul 16 2018 at 09:44 (+0200):
1. State monad allows you to mark change of THIS state, so you can easy find where THIS state is changing (tracking changes) 2. Singleton with FSM allows you to *control* change and to isolate all change logic in one place
1st allows spaghetti, 2nd - does not. 2nd force you to another model:
You seem to like it when your paradigm forces you to do something (just like I do). Now, monads force you to program in a certain way. Furthermore, you could say that FSMs are a workaround for the way conventional imperative languages manipulate state. My point is: whether monads are a workaround or a solution depends on the angle from which you look at the situation. (I think you say something similar too.) - Sergiu
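Sergiu's point about lift marking the stack level can be illustrated with a small sketch using the transformers package that ships with GHC: StateT layered over Maybe, where lift explicitly reaches into the underlying layer (`checkedDecr` is invented for illustration):

```haskell
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, runStateT, get, modify)

-- get/modify act in the StateT layer; 'lift Nothing' drops down
-- one level, into the underlying Maybe, to abort the computation.
checkedDecr :: StateT Int Maybe ()
checkedDecr = do
  n <- get
  if n <= 0 then lift Nothing else modify (subtract 1)
```

`runStateT checkedDecr 2` is `Just ((), 1)`, while running it twice from 1 yields `Nothing`: the "request to change" can fail, and the lift shows exactly which layer says no.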
So I think if you don't see anybody explicitly mentioning spaghetti issues with State that's for some people it's just hiding in plain sight and they either aren't consciously aware of it, or find that area so self-explaining that they do not think they really need to explain that.
IMHO State monad solution is orthogonal to my point. It does not force you to isolate state change in one place with explicit control, it only marks place where it happens. This info is needed to compiler, not to me. For me - no benefits. Benefit to me - to isolate changing, but with State I can (and all of us do it!) smear change points throughout the code. So, my question is: what exact problem does solve State monad? Which problem? Mine or compiler? Haskell pure lambda-only-abstraction limitation? OK, if we imagine another Haskell, similar to F#, will I need State monad yet? IMHO - no. My point is: State monad is super, in Haskell, and absolutely waste in other languages. I will isolate mutability in another manner: more safe, robust and controllable. Recap: 1. State monad allows you to mark change of THIS state, so you can easy find where THIS state is changing (tracking changes) 2. Singleton with FSM allows you to *control* change and to isolate all change logic in one place
1st allows spaghetti, 2nd - does not. 2nd force you to another model: not changes, but change requests, which can return: "not possible". With Haskell way the check "possible/not possible" will happen in locations where you change state in State monad: anywhere. So, my initial point is: State monad is about Haskell abstraction problems, not about developer problems.
Sorry, but that's not what OO is about. Also, I do not think that you're using general FSMs, else you'd be having transition spaghetti. To be precise, then yes, you are right. But such model forces me more, then monadic model. When you create singleton "PlayerBehavior", and have all setters/getters in this singleton and already check (in one place!) changes - next step is to switch from checks to explicit FSM - in the same place. Haskell nothing offers for this. You *can* do it, but monads don't force you and they are about Haskell problems, not mine. Motivation of State monad is not to solve problem but to introduce state mutability in Haskell, this is my point. OK, State monad has helpful side-effect: allows to track change of concrete THIS state, but I can do it with my editor, it's more valuable to Haskell itself, then to me, because no problem to mutate state: Haskell allows it, Haskell also does not guard you to mutate this state anywhere in the application.
I'm agree with you 100%. My point is related to accents only, my thesis is: monads have value, but it's small, it's justified in Haskell with its limitation to one abstraction, but I don't need monads in other languages, their value in other languages is super-small (if even exists). So, motivation of monads introduction (for me, sure, I'm very subjective) is to workaround Haskell model, not to make code more safe, I'm absolutely sure: monads nothing to do with safety. It's like to use aspirin with serious medical problem :)
Let me repeat: What you call a "message" is just a standard synchronous function call. The one difference is that the caller allows the target type to influence what function gets actually called, and while that's powerful it's quite far from what people assume if you throw that "message" terminology around. I mentioned Erlang early: the same - you send message to FSM which will be lightweight process. Idea of agents and messages is the same in Smalltalk, in QNX, in Erlang, etc, etc... So, "message" does not always mean "synchronous call". For example, QNX "optimizes" local messages, so they are more lightweight in comparison with remotely messages (which are naturally asynchronous). But "message" abstraction is the same and is more high-level then synchronous/asynchronous dichotomy. It allows you to isolate logic - this is the point. Haskell nothing to do with it: you smear logic anywhere. But now you mark it explicitly. And you have illusion that your code is more safe.
But that's not the point. The point is that Haskell makes it easy to write non-spaghetti.
How? In Haskell I propagate data to a lot of functions (as argument or as hidden argument - in some monad), but with singleton+FSM - you can not do it - data is hidden for you, you can only *call logic*, not *access data*. Logic in Haskell is forced to be smeared between a lot of functions. You *CAN* avoid it, but Haskell does not force you.
BTW you have similar claims about FSMs. Ordinarily they are spaghetti incarnate, but you say they work quite beautifully if done right. (I'm staying sceptical because your arguments in that direction didn't make sense to me, but that might be because I'm lacking background information, and filling in these gaps is really too far off-topic to be of interest.)
I respect your position. Everybody has different experience, and this is basically very good!
We often repeat this: “side-effects”, “tracks”, “safe”. But what does it actually mean? Can I have side-effects in Haskell? Yes. Can I mix side-effects? Yes. But in more difficult way than in ML or F#, for example. What is the benefit?
That it is difficult to accidentally introduce side effects - or rather, the problems that come with side effects. Formally, no Haskell program can have a side effect (unless it uses unsafePerformIO or the FFI, but that's not what we're talking about here).
Actually, if we look at this from a high level, as a "black box", we see that this is true. Haskell allows you to have side effects and to mix them, just in a different manner.
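As a concrete illustration of the point being argued in this exchange (my own minimal sketch, not code from the thread): in Haskell a side effect shows up in the type, so adding a debug print changes a function's signature, and every caller sees the effect.

```haskell
-- A pure function: the type promises no effects.
double :: Int -> Int
double x = x * 2

-- The moment we add a debug print, the type must change to IO Int;
-- the effect is visible in the signature, not hidden in the body.
doubleLoud :: Int -> IO Int
doubleLoud x = do
  putStrLn ("doubling " ++ show x)  -- the side effect
  pure (x * 2)

main :: IO ()
main = do
  print (double 21)
  r <- doubleLoud 21
  print r
```

Whether this is a benefit or mere bookkeeping is exactly what is being debated here; the mechanism itself is uncontroversial.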
Yes they will. Some tests will fail if they expect specific output. If the program has a text-based user interface, it will become unusable.
And vice versa: if I remove the "print" from such tests and make the functions pure, they can fail too. IMHO purity/impurity in your example is about expected behaviour and its violation, not about "more pure means fewer bugs". A pure function can violate its contract just as well as an impure one.
Yes, they will become buggy. You'll get aliasing issues. And these are the nastiest things to debug, because they will hit you if and only if the program is so large that you don't know all the data flows anymore, and your assumptions about what might be an alias start to break down. Or maybe not you, but the new coworker who doesn't yet know all the parts of the program. That's exactly why data flow is being pushed towards being explicit.
So, to avoid this I should not mix read/write monads, and should avoid RWST? In that case they should be removed from the language - and monad transformers too. My point is that there is some misunderstanding here. I often hear "side effects are related to errors", "we should avoid them", "they lead to errors", etc., but IMHO the pure/impure distinction is needed by the FP language's compiler, not by me. That is the real motto. Adding side effects does not automatically lead to bugs; mostly it does not. It would be more correct to say: distinguishing pure from impure code makes it easier to analyze the code, to manipulate it, to transform it (as a programmer I can transform F# code *easily because there are no monads*; in Haskell the *compiler* can transform code easily *because of monads*). A more important argument for me is the example of Free monads. They allow you to simulate behaviour, to check logic without involving real external actions (side effects). Yes, OK, that is an argument. It's not explicitly related to buggy code, but it's useful. It reminds me of homoiconic Lisp code, where code can be processed as data, as an AST.
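Since Free monads come up here as the one argument the author accepts, a minimal hand-rolled sketch (my own; the tiny `Cmd` DSL is hypothetical) of what "simulate behaviour without real external actions" means: the same program can be interpreted purely, against canned input, with no IO involved.

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Hand-rolled Free monad (no external packages needed).
data Free f a = Pure a | Free (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a)  = Pure (g a)
  fmap g (Free fa) = Free (fmap (fmap g) fa)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Free fg <*> x = Free (fmap (<*> x) fg)

instance Functor f => Monad (Free f) where
  Pure a  >>= k = k a
  Free fa >>= k = Free (fmap (>>= k) fa)

-- A hypothetical two-command DSL: print a line, read a line.
data Cmd next = Say String next | Ask (String -> next) deriving Functor

say :: String -> Free Cmd ()
say s = Free (Say s (Pure ()))

ask :: Free Cmd String
ask = Free (Ask Pure)

greet :: Free Cmd ()
greet = do { say "name?"; n <- ask; say ("hi " ++ n) }

-- Pure interpreter: feed canned input, collect output. The program's
-- logic is checked without touching the outside world.
simulate :: [String] -> Free Cmd a -> [String]
simulate _        (Pure _)         = []
simulate ins      (Free (Say s k)) = s : simulate ins k
simulate (i:ins)  (Free (Ask k))   = simulate ins (k i)
simulate []       (Free (Ask k))   = simulate [] (k "")

main :: IO ()
main = print (simulate ["Paul"] greet)
```

An IO interpreter for the same `greet` program would be a second, equally small function; the program itself never changes.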
Actually, I had a big, interesting discussion in my company with people who do not like FP (the root of why I started asking myself these questions). And I understood their arguments. I tried to find a solid basis for mine. But currently I see that I like Haskell's solutions in themselves, and I cannot show concrete examples of where they are needed in the real world, apart from Haskell-specific limitations. I know that those limitations lead to slow compilation and to a big, complex compiler. I cannot prove that side effects "lead to errors", or (more interestingly) that it's bad to mix different side effects with each other. F#, ML and Lisps have "do"-blocks and no problem with them. They don't need transformers to mix two different effects in one do-block. If you can prove that this decision leads to bugs and that the Haskell solution does not, it will be a bomb :) I think there will be a lot of people in CS who will never agree with you.
--- Best regards, Paul
_______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post.

I actually lost interest, because you are kind of trying to tell me that because of your lack of _familiarity_ with monads, _my_ benefits do not count. I cannot agree with this statement and with this approach. But since there were questions, I'll answer and do some small clarification.
I asked because I have never tried Eta. So, if you are right, it seems there is no reason to develop Eta... I am not sure why you bring Eta into this discussion as an example, only to point out later that you have no experience with it. The point of Eta is to run Haskell on the JVM. Haskell as of GHC, and not some hypothetical hybrid language (that would be Scala). If you want a decent language and you must run on the JVM, then you use Eta. If you don't need to run on the JVM, you don't use Eta.
No better definition than the original: https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/computatio... You see, they are different. Now it is your turn to read this link. The second sentence on that page says: "They can be used to provide a convenient syntax for *monads*, a functional programming feature that can be used to manage data, control, and side effects in functional programs." The emphasis on _monads_ isn't mine, it is original. Computation expressions are _monadic_ (when they obey the laws, though that page unfortunately doesn't say anything about laws).
Something like this: user?.Phone?.Company?.Name??"missing"; ? Still no.
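For comparison, the null-propagating chain quoted just above can be written with the Maybe monad. This is my own sketch; the record names (`phone`, `company`, `name`) are hypothetical stand-ins for the C# properties in the quote.

```haskell
import Data.Maybe (fromMaybe)

-- Hypothetical types mirroring the user?.Phone?.Company?.Name chain.
newtype Company = Company { name :: Maybe String }
newtype Phone   = Phone   { company :: Maybe Company }
newtype User    = User    { phone :: Maybe Phone }

-- Each <- short-circuits to Nothing, like each ?. in the C# version;
-- fromMaybe plays the role of the ?? "missing" default.
companyName :: Maybe User -> String
companyName mu = fromMaybe "missing" $ do
  u <- mu
  p <- phone u
  c <- company p
  name c

main :: IO ()
main = do
  putStrLn (companyName Nothing)
  putStrLn (companyName (Just (User (Just (Phone (Just (Company (Just "Acme"))))))))
```

The difference being debated is not whether this can be expressed, but whether `?.` as special syntax or `Maybe` as an ordinary library monad is the better design.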
It does not force you to isolate state changes in one place with explicit control, it only marks the places where they happen.
If I add “print” for debug purposes in some subroutines, will they become buggy?
No. Then you add this code inside a transaction and voilà - yes, you do have a bug. In fact, my colleague who used to work at Nasdaq had a story about exactly this: once upon a time there was a beautiful piece of code that lived in a transaction. Then someone accidentally introduced a side effect into one of the functions called from within that transaction. The bug was noticed only after some time, when it had already done some damage. In Haskell, you can't do IO in STM, so that wouldn't be possible.

processAttack :: (MonadState s m, HasBpsCounters s) => Attack -> m Result

No benefits for you, tons of benefits for me: I guarantee that this function can only be called if there is access to some BPS counters. I guarantee that this function, and whatever it uses inside itself, can never touch or even look at anything else in my state. I can guarantee that it doesn't cause any other effects: it doesn't call something which calls something which prints to the console or writes to a DB. And if, during code evolution or refactoring, someone introduces something like that, it won't compile.
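A minimal sketch of the STM guarantee mentioned here (my own example, using the standard stm library): inside an `STM` transaction only transactional reads and writes type-check, so the "accidental side effect inside a transaction" bug cannot even compile.

```haskell
import Control.Concurrent.STM

-- A transaction over two shared counters. The STM type means only
-- transactional variables can be touched here; uncommenting the
-- putStrLn below produces a type error, not a latent bug.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to   (+ n)
  -- putStrLn "debug"  -- rejected: IO action in an STM context

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  readTVarIO a >>= print
  readTVarIO b >>= print
```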
Haskell also does not prevent you from mutating this state anywhere in the application. I think my example above proves otherwise: in Haskell, I can granularly control who can update which part of the state, which makes your statement invalid.
monads have value, but it's small ... their value in other languages is super-small Again, a VERY bold statement I have to disagree with. F# workflows are monadic, and C# LINQ was precisely modelled by E. Meijer as a list monad. They add great value.
With this, I rest my case, thanks for the discussion.
Regards,
Alexey.
On Mon, Jul 16, 2018 at 5:45 PM PY
So I think if you don't see anybody explicitly mentioning spaghetti issues with State, it's because for some people the issue is hiding in plain sight: they either aren't consciously aware of it, or find that area so self-explanatory that they don't think they really need to explain it.
IMHO the State monad solution is orthogonal to my point. It does not force you to isolate state changes in one place with explicit control; it only marks the places where they happen. This information is needed by the compiler, not by me. For me there is no benefit. The benefit for me would be isolating the change, but with State I can (and all of us do it!) smear the change points throughout the code. So my question is: what exact problem does the State monad solve? Whose problem - mine, or the compiler's? Haskell's pure, lambda-only-abstraction limitation? OK, if we imagine another Haskell, similar to F#, would I still need the State monad? IMHO no. My point is: the State monad is super in Haskell, and an absolute waste in other languages. I would isolate mutability in another manner: safer, more robust and more controllable. Recap: 1. The State monad lets you mark changes to THIS state, so you can easily find where THIS state is changed (tracking changes). 2. A singleton with an FSM lets you *control* change and isolate all the change logic in one place.
The 1st allows spaghetti; the 2nd does not. The 2nd forces you into another model: not changes, but change requests, which can return "not possible". The Haskell way, the "possible/not possible" check happens wherever you change state in the State monad: anywhere. So my initial point is: the State monad is about Haskell's abstraction problems, not about the developer's problems.
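For readers following this argument, here is a minimal sketch (my own, not from the thread) of the tracking that is being debated: with State, the type distinguishes functions that can touch the counter from those that provably cannot.

```haskell
import Control.Monad (replicateM)
import Control.Monad.State

-- A counter threaded through State. The `State Int` in the type marks
-- every function that may read or write the counter.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  pure n

-- Pure: the type alone proves this cannot touch the counter.
render :: Int -> String
render n = "tick #" ++ show n

main :: IO ()
main = do
  let (xs, final) = runState (replicateM 3 tick) 0
  mapM_ (putStrLn . render) xs
  print final
```

Whether this marking is mere bookkeeping for the compiler or a real aid to the reader is exactly the disagreement in this subthread.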
Sorry, but that's not what OO is about. Also, I do not think that you're using general FSMs, else you'd be having transition spaghetti. To be precise: yes, you are right. But such a model constrains me more than the monadic model does. When you create a singleton "PlayerBehavior" and have all the setters/getters in this singleton, already checking changes (in one place!), the next step is to switch from checks to an explicit FSM - in the same place. Haskell offers nothing for this. You *can* do it, but monads don't force you, and they are about Haskell's problems, not mine. The motivation of the State monad is not to solve a problem but to introduce state mutability into Haskell - this is my point. OK, the State monad has a helpful side effect: it lets me track changes to this concrete state. But I can do that with my editor; it's more valuable to Haskell itself than to me, because there is no problem with mutating state: Haskell allows it, and Haskell also does not prevent you from mutating this state anywhere in the application.
I agree with you 100%. My point is about emphasis only; my thesis is: monads have value, but it's small. They are justified in Haskell, with its restriction to a single abstraction, but I don't need monads in other languages; their value in other languages is super-small (if it exists at all). So the motivation for introducing monads (for me - sure, I'm very subjective) was to work around Haskell's model, not to make code safer. I'm absolutely sure: monads have nothing to do with safety. It's like taking aspirin for a serious medical problem :)

On Fri, Jul 13, 2018 at 08:49:02PM +0200, Joachim Durchholz wrote:
The other issue is that Haskell's extensions make it more difficult to have library code interoperate.
Do they? Can you give any examples?

On 12.07.2018 at 12:28, Vanessa McHale wrote:
I am not familiar with F#.
F# is OCaml, ported and adapted to .net. Or at least it started that way; I don't know how far they have diverged. Regards, Jo

quoth Joachim Durchholz
F# is OCaml, ported and adapted to .net. Or at least it started that way; I don't know how far they have diverged.
Rather far. Don't quote me, I know only what I've read about F# and am not sure I know that, but lots of Objective CAML was tossed, and of course there's a lot of .NET. The syntax apparently uses significant white space, so that's one huge step forward if it works - the standard Objective CAML syntax is quite evil in that respect. But I believe the interesting parts are gone: modules, and the OOP implementation, which I have the impression might be one of the more respectable OO designs. Donn

ML structures/signatures/functors are gone, IMHO, in favour of namespaces and F# modules. As far as I know, the attempts to stay closer to Haskell failed, so Microsoft decided to make its own ML dialect. All decisions were driven by the .NET architecture, of course (and OOP is close to .NET; the original OCaml OOP was not easy to adapt). But to me it looks good. For example, the syntax for objects looks canonical, not like something alien bolted onto ML. My impression is that Microsoft has taken the right direction with F# as well as with C#.
From: Donn Cave
Sent: 12 July 2018 20:21
To: haskell-cafe@haskell.org
Subject: Re: [Haskell-cafe] Investing in languages (Was: What is your favourite Haskell "aha" moment?)

Hi Brett, I see room for other languages, with other potential paradigms, but of all those implementations that I know of, I have never seen anything with a practical result. They all fall for the Fallacy of False Moderation in my opinion. Perhaps this makes me a hardliner, but until something more useful comes along, it's going to have to stay that way. The code I am writing at this very moment cannot even exist in F# or Rust. PS: I thought I had already sent this, sorry. On 07/12/2018 04:40 PM, Brett Gilio wrote:
Tony,
I am curious on your attitude towards multi-paradigm and ML-like languages. I agree that functional programming is easily the better of the bundle in many forms of application logic and elegance (which is why I have come to love Scheme and Haskell), but do you see any room for those languages like F# or Rust which have large capacities for FP but are either functional-first (but not pure) or a hybrid?
Brett Gilio
On 07/12/2018 01:35 AM, Tony Morris wrote:
I used to teach undergrad OOP nonsense. I have been teaching FP for 15 years. [^1]
The latter is *way* easier. Existing programmers are more difficult than children, but still way easier to teach FP than all the other stuff.
[^1]: Canberra anyone? https://qfpl.io/posts/2018-canberra-intro-to-fp/
On 07/12/2018 04:23 PM, Joachim Durchholz wrote:
On 11.07.2018 at 16:36, Damian Nadales wrote:
I speak only from my own narrow perspective. I'd say programming is hard, but functional programming is harder.
Actually it's pretty much the opposite, I hear from teachers.
Maybe that's why Java replaced Haskell in some universities' curricula
The consideration is marketable skills. A considerable fraction of students looks at the curriculum and at job offers, and if they find that the lists don't match, they will go to another university. Also, industry keeps lobbying for teaching skills that it can use. Industry can give money to universities, which gives it influence on the curriculum (if only by getting time to talk the topic over with the dean). This aspect can vary considerably between countries, depending on how much money the universities tend to acquire from industry.
https://chrisdone.com/posts/dijkstra-haskell-java. For some reason most programmers I know are not scared of learning OO, but they fear functional programming.
Programmers were *very* scared of OO in the nineties. It took roughly a decade or two (depending on where you put the starting point) to get comfortable with OO.
I think the reason might be that OO concepts
like inheritance and passing messages between objects are a bit more concrete and easier to grasp (when you present toy examples at least).
OO is about how to deal with having to pack everything into its own class (and how to arrange stuff into classes). Functional is about how to deal with the inability to update. Here, the functional camp actually has the easier job, because you can just tell people to just write code that creates new data objects and get over with it. Performance concerns can be handwaved away by saying that the compiler is hyper-aggressive, and "you can look at the intermediate code if you suspect the compiler is the issue". (Functional is a bit similar to SQL here, but the SQL optimizers are much less competent than GHC at detecting optimization opportunities.)
Then you have design patterns, which have intuitive names and give some very general guidelines that one can try after reading them (and add his/her own personal twist). I doubt people can read the Monad laws and make any sense out of them at the first try.
That's true, but much of the misconceptions around monads from the first days have been cleared up. But yes, the monad laws are too hard to read. OTOH you won't be able to read the Tree code in the JDK without the explanations either.

On 07/12/2018 06:36 PM, Tony Morris wrote:
Hi Brett, I see room for other languages, with other potential paradigms, but of all those implementations that I know of, I have never seen anything with a practical result.
I agree with your thesis here; the facts are pretty clear that even in a multi-paradigm approach there is still a clearer and more elegant way to approach the programmatic logic. My interest in F# and other more "multi-paradigm" languages (even C++, nowadays) came to a... shall we say, bridge point when I discovered the power of Scheme implementations and Haskell/Idris.
PS: I thought I had already sent this, sorry.
You had, it was my mistake that I responded off of the list. -- Brett Gilio brettg@posteo.net | bmg@member.fsf.org Free Software -- Free Society!

the monad laws are too hard to read.
FWIW the monad laws are not hard to *read* if written in this form:

return >=> f = f
f >=> return = f
(f >=> g) >=> h = f >=> (g >=> h)

(Whether they're easy to *understand* in that form is another matter.)

On Jul 12, 2018, at 01:42, Tom Ellis
wrote: the monad laws are too hard to read.
FWIW the monad laws are not hard to *read* if written in this form
return >=> f = f
f >=> return = f
(f >=> g) >=> h = f >=> (g >=> h)
(Whether they're easy to *understand* in that form is another matter.)
Here is another formulation of the monad laws that is less frequently discussed than either of the ones using bind or Kleisli composition:

(1) join . return = id
(2) join . fmap return = id
(3) join . join = join . fmap join

These laws map less obviously to the common ones, but I think they are easier to understand (not least because `join` is closer to the “essence” of what a monad is than >>=). (1) and (2) describe the intuitive notion that adding a layer and squashing it should be the identity function, whether you add the new layer on the outside or on the inside. Likewise, (3) states that if you have three layers and squash them all together, it doesn’t matter whether you squash the inner two or the outer two together first. (Credit goes to HTNW on Stack Overflow for explaining this to me. https://stackoverflow.com/a/45829556/465378)
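These laws can be spot-checked at a few concrete values (my own snippet; a quick sanity check at Maybe and lists, not a proof):

```haskell
import Control.Monad (join)

-- join . return = id, and join . fmap return = id, at Maybe:
law1, law2 :: Bool
law1 = join (return (Just 3)) == Just 3
law2 = join (fmap return (Just 3)) == Just 3

-- join . join = join . fmap join, at a triply-nested list:
law3 :: Bool
law3 = join (join xs) == join (fmap join xs)
  where xs = [[[1,2],[3]],[[4]]] :: [[[Int]]]

main :: IO ()
main = print (law1 && law2 && law3)
```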

Hi, And if you draw the diagrams corresponding to the monad laws for join and return, they have exactly the same shape as the monoid laws (left and right unitality, and associativity); the Cartesian product is just exchanged for functor composition. It's a nice exercise in its own right to observe this fact, so I will leave it here. Regards, Marcin On 12 Jul 2018, 09:06, Alexis King wrote:

On Thu, Jul 12, 2018 at 1:24 AM Joachim Durchholz
Maybe that's why Java replaced Haskell in some universities curricula
The considerations are marketable skills.
That's certainly a consideration. I doubt it weighs particularly heavily on the departments at Austin, Berkeley, or MIT, however. Their graduates will be plenty marketable regardless of the languages they see most as undergraduates.

Another concern is who the courses serve. At my institution, our intro programming courses serve both electrical engineering and computer science, so we have to include the EE folks' needs in our considerations. Many institutions (Harvey Mudd has done an excellent job of this) are also serving the wider data science community with the same introductory course (albeit different sections). This is great news for exposing people to computer science, but may impose additional requirements on how you teach.

Yet another concern is teaching load. At my (relatively small) state institution, we teach something like 4 sections of programming I, and at least another two sections of intro CS for non-majors. If I were to champion updating that part of the curriculum, it'd be on me to teach those sections, or to work with/train instructors to be able to do so. This is a non-trivial time commitment, and there's no guarantee that whoever ended up with the intro sequence after I ran out of energy/time/employment would have the same priorities.

A final note: just because a department chooses to teach its introductory course in an imperative language doesn't mean it's teaching C-style procedural programming. Look at Harvey Mudd's CS 5 book: https://www.cs.hmc.edu/csforall/. It's taught in Python, but the first half of the book focuses on functional programming paradigms.

/g

Am 12.07.2018 um 19:50 schrieb J. Garrett Morris:
On Thu, Jul 12, 2018 at 1:24 AM Joachim Durchholz
wrote: Maybe that's why Java replaced Haskell in some universities curricula
The considerations are marketable skills.
That's certainly a consideration. I doubt it weighs particularly heavy on the departments at Austin, Berkeley, or MIT however. Their graduates will be plenty marketable regardless of the languages they see most as undergraduates.
Yes, but 90% of universities are not elite, and since these educate 90% of the students, we still end with a prevalence of marketable programming language skills. Not that I'm criticizing this. Not too strongly anyway; there are positive and negative aspects to this. (Thanks for the rest, leaving it out because I have nothing of value to add, not because I disagree.)

Yes, but 90% of universities are not elite, and since these educate 90% of the students, we still end with a prevalence of marketable programming language skills.
BTW, the choice of programming language is also often influenced by the whole CS faculty members (not just the programming-language guys), and guess what: most of them don't use anything like Haskell in their projects. Stefan

Dear Stefan,
BTW, the choice of programming language is also often influenced by the whole CS faculty members (not just the programming-language guys), and guess what: most of them don't use anything like Haskell in their projects.
I suspect this varies from institution to institution. My experience is that the faculty who most often teach a course have the most say regarding its subject matter. Of course, the decision isn't a simple matter of personal preference; there are other collegial considerations, e.g., preparing students for subsequent classes, and making it easy to recruit other faculty and teaching staff to cover other sections.

What this has meant at Chicago is that our mainstream intro course starts with Typed Racket, even though John Reppy (who regularly teaches it) would naturally prefer SML/NJ. On the other hand, the course was Scheme/Racket before, and the movement to Typed Racket is consistent with both the longstanding traditions of the course (which go back to SICP) and the idiosyncrasies of its faculty lead.

On the other hand, I teach the honors version, and for many years the only section of it. Having no one to satisfy but myself, I settled on Haskell. These days there's a second section, and Ravi Chugh and I collaborate. FWIW, I'm not a PL guy, but rather a theoretician via mathematical logic and computability theory. We get less push-back than you'd expect, as the quality of our students is taken as indicative of the quality of our choices.

My public justification for teaching Haskell is that the honors students I encounter either already have programming experience (usually Java, these days), or are hard-core math geeks. Both have something substantial to add to the course, and their disadvantages and advantages largely offset. It makes for a productive community. My private justification includes the public one, but adds the very real consideration that by teaching Haskell, I continue to grow with the language and its practice. Ravi and I end up rewriting about 1/4 to 1/3 of the course every year, which keeps it and us fresh.

Peace, Stu
---------------
Stuart A. Kurtz
Professor, Department of Computer Science and the College
Master, Physical Sciences Collegiate Division
The University of Chicago

https://github.com/functionaljava/functionaljava/blob/6ac5f9547dbb1f0ca3777b... This function was written over 15 years ago.

I disagree with the reasoning that, although a lot of programmers are unwilling to make the investment to learn (true), this has consequences for the use of Haskell (not true). Those same programmers have barely learned Java, and yet Java is pervasive throughout our industry. I learned this (that nobody knows Java) when I was working on implementing the JVM and asked myself, "if I am to implement this Java thing, does anyone out there actually know it?" I quickly learned that the answer is no. In fact, I wrote a test back then (~2003), subtitled "but do you even know the basics of Java?", and the best score to this day on that test is 4/10 (achieved twice). I wrote that test to debunk the common protest, "but where will I hire Java programmers?!" The correct answer is nowhere; they do not exist.

Sorry for the diversion.

On 07/12/2018 12:03 AM, Bryan Richter wrote:
cartProdN :: [[a]] -> [[a]]
cartProdN = sequence
This also made me realize two things: 0. Haskell will never be mainstream, because there are not a lot of programmers out there who are willing to make the investment required to learn the concepts necessary to understand and write code like the one shown above.
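For readers who don't know the list monad: `sequence` at type `[[a]] -> [[a]]` picks one element from each inner list in every possible way, i.e. the n-ary Cartesian product. A self-contained restatement of the quoted one-liner, with a tiny driver:

```haskell
-- n-ary Cartesian product via the list monad: one element from each
-- inner list, in every possible combination.
cartProdN :: [[a]] -> [[a]]
cartProdN = sequence

main :: IO ()
main = do
  print (cartProdN [[1, 2], [3, 4]])
  -- [[1,3],[1,4],[2,3],[2,4]]
  print (length (cartProdN (replicate 3 [0, 1])))
  -- 8: the corners of a cube, and it generalises to any dimension
```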

Am 11.07.2018 um 15:36 schrieb Damian Nadales:
0. Haskell will never be mainstream, because there are not a lot of programmers out there who are willing to make the investment required to learn the concepts necessary to understand and write code like the one shown above.
For the uninitiated, learning Java takes about the same amount of time as learning Haskell. Learning monads is like learning Spring or one of the gazillion other library frameworks out there in the Java world, so even that isn't much of a difference. If you're already a programmer, then chances are that Haskell is indeed harder to learn than most other languages you might pick up, because the overlap of standard concepts and techniques is smaller. IOW, it's harder to teach Haskell to programmers than to non-programmers. Which also means that our conclusions from our experience as programmers do not predict much about Haskell's future.
1. Haskell has rendered me unemployable for almost all jobs that do not involve Haskell codebases.
This can be avoided if you find a job that has a 50-50 split of Haskell and non-Haskell work, but that's not always possible and I agree this can become a huge problem. Regards, Jo

I have several short examples that I quite like:

#1: changes a probability density function into a cumulative density function

    cdf :: (Num a) => [a] -> [a]
    cdf = drop 2 . (scanl (+) 0) . ((:) 0)

#2: enumerate all strings on an alphabet (this uses laziness!)

    allStrings :: [a] -> [[a]]
    allStrings = sequence <=< (inits . repeat)

#3: enumerate the Fibonacci numbers (this one uses laziness too)

    fibonacci :: (Integral a) => [a]
    fibonacci = 1 : 1 : zipWith (+) fibonacci (tail fibonacci)

#4: return all subsets of a list

    allSubsets :: [a] -> [[a]]
    allSubsets = filterM (pure [True, False])

I also have two blog posts I wrote that contain lots of short examples. The first contains lots of short-but-interesting programs and the second contains examples of how expressive Haskell is (by doing the same thing multiple times):

http://blog.vmchale.com/article/haskell-aphorisms
http://blog.vmchale.com/article/sum

On 07/11/2018 07:10 AM, Simon Peyton Jones via Haskell-Cafe wrote:
> [Simon's original message, quoted in full, snipped]
_______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post.
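For readers following along without a REPL, the four examples in Vanessa's message above can be sanity-checked directly. This sketch just restates them, with expected values worked out by hand from the definitions:

```haskell
import Control.Monad (filterM, (<=<))
import Data.List (inits)

-- PDF to CDF: running sums of the input.
cdf :: Num a => [a] -> [a]
cdf = drop 2 . scanl (+) 0 . ((:) 0)

-- All strings over an alphabet, grouped by length (an infinite list).
allStrings :: [a] -> [[a]]
allStrings = sequence <=< (inits . repeat)

-- The Fibonacci numbers, defined in terms of themselves.
fibonacci :: Integral a => [a]
fibonacci = 1 : 1 : zipWith (+) fibonacci (tail fibonacci)

-- Powerset: each element is independently kept or dropped.
allSubsets :: [a] -> [[a]]
allSubsets = filterM (pure [True, False])

main :: IO ()
main = do
  print (cdf [1, 2, 3, 4])          -- [1,3,6,10]
  print (take 4 (allStrings "ab"))  -- ["","a","b","aa"]
  print (take 6 fibonacci)          -- [1,1,2,3,5,8]
  print (allSubsets [1, 2])         -- [[1,2],[1],[2],[]]
```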

I'm more on the beginner side, but my "aha" moments were reading up on the difference between print and putStrLn, and between pure and return. Also, understanding the Maybe monad. Finally, reading through some probability/randomness examples of rolling dice (for the chapter on the State monad) and seeing how mathematical they look (i.e., they are similar to how I would reason about them just knowing statistics). Oh yeah... and folds. Fold all the things.

- Krystal
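Since the Maybe monad comes up here: for readers who haven't seen it, the "aha" is that failure propagates through a chain of computations with no explicit checks. A minimal sketch (the function names are my own invention, not from the thread):

```haskell
-- Safe division: Nothing signals failure instead of a runtime error.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Chain two divisions; any Nothing short-circuits the whole computation,
-- with no error-handling code in sight.
calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  q <- safeDiv a b
  safeDiv q c

main :: IO ()
main = do
  print (calc 100 5 2)  -- Just 10
  print (calc 100 0 2)  -- Nothing
```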
On 11 July 2018 at 06:46, Vanessa McHale wrote:
> [quoted messages snipped]

Vanessa, I added your blog to my bookmarks :) Thanks 11.07.2018 16:46, Vanessa McHale wrote:
> [quoted messages snipped]

Hi, On 07/11/2018 02:46 PM, Vanessa McHale wrote:
#2: enumerate all strings on an alphabet (this uses laziness!)
allStrings :: [a] -> [[a]] allStrings = sequence <=< (inits . repeat)
Neat! I find the following alternative appealing too, as it in essence just states a recursive equation saying what it means for a list to be a list of all strings over the given alphabet:

    allStrings alphabet = xss
      where xss = [] : [ x : xs | xs <- xss, x <- alphabet ]

(Admittedly, one has to be careful with the ordering of the generators, or the order in which the strings are enumerated becomes less useful.)

This capability of declaratively stating an equation that characterises the sought answer is another perspective on why techniques like dynamic programming are such a great fit for lazy languages, as pointed out by Jake. One of my favourite examples is finding a minimal-length triangulation of a polygon, where an elegant solution is obtained by just transliterating the defining equations from a classic textbook on data structures and algorithms (Aho, Hopcroft, Ullman 1983). Attribute grammar evaluation is another great application of laziness in a similar vein.

Best,
/Henrik

This message and any attachment are intended solely for the addressee and may contain confidential information. If you have received this message in error, please contact the sender and delete the email and attachment. Any views or opinions expressed by the author of this email do not necessarily reflect the views of the University of Nottingham. Email communications with the University of Nottingham may be monitored where permitted by law.
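Henrik's corecursive definition above can be checked directly. A self-contained sketch, with the enumeration order worked out by hand:

```haskell
-- A string over the alphabet is either empty, or a character consed onto
-- a previously enumerated string. Laziness lets xss refer to itself.
allStrings :: [a] -> [[a]]
allStrings alphabet = xss
  where xss = [] : [ x : xs | xs <- xss, x <- alphabet ]

main :: IO ()
main = print (take 7 (allStrings "ab"))
-- With this generator ordering the strings come out grouped by length:
-- ["","a","b","aa","ba","ab","bb"]
```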

In a few weeks I'm giving a talk to a bunch of genomics folk at the Sanger Institute https://www.sanger.ac.uk/ about Haskell. They do lots of programming, but they aren't computer scientists. I can tell them plenty about Haskell, but I'm ill-equipped to answer the main question in their minds: why should I even care about Haskell? I'm too much of a biased witness.
I don't much like the monad solution for side effects, but if those folks have some knowledge of the horror of concurrent programming with locks, the STM system would be a good candidate.

Stefan
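To make the STM suggestion concrete: the classic demonstration is an atomic transfer between two shared variables, which composes without locks. A minimal sketch, assuming GHC's stm library (the account names are made up for illustration):

```haskell
import Control.Concurrent.STM

-- Move n units from one TVar to another, atomically: no other thread
-- can ever observe a state where the amount has left one account but
-- not yet arrived in the other.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to (+ n)

main :: IO ()
main = do
  alice <- newTVarIO 100
  bob   <- newTVarIO 0
  atomically (transfer alice bob 30)
  balances <- atomically ((,) <$> readTVar alice <*> readTVar bob)
  print balances  -- (70,30)
```

Crucially, `transfer` is itself an STM action, so larger transactions can be built from it by ordinary composition, something lock-based code famously cannot do safely.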

I find it quite elegant! The fact that you can define the IO monad in Haskell was quite a revelation. And it's especially nice when paired with a demonstration of the C FFI (where you might *need* to sequence side effects, such as freeing a value after it has been read).

    newtype IO a = IO (State# RealWorld -> (# State# RealWorld, a #))

(Definition from GHC.Types in ghc-prim: http://hackage.haskell.org/package/ghc-prim-0.5.2.0/docs/src/GHC.Types.html#...)

On 07/11/2018 09:14 AM, Stefan Monnier wrote:
> [quoted message snipped]
-- *Vanessa McHale* Functional Compiler Engineer | Chicago, IL Website: www.iohk.io http://iohk.io Twitter: @vamchale PGP Key ID: 4209B7B5 Input Output http://iohk.io Twitter https://twitter.com/InputOutputHK Github https://github.com/input-output-hk LinkedIn https://www.linkedin.com/company/input-output-global This e-mail and any file transmitted with it are confidential and intended solely for the use of the recipient(s) to whom it is addressed. Dissemination, distribution, and/or copying of the transmission by anyone other than the intended recipient(s) is prohibited. If you have received this transmission in error please notify IOHK immediately and delete it from your system. E-mail transmissions cannot be guaranteed to be secure or error free. We do not accept liability for any loss, damage, or error arising from this transmission

The fact that you can define the IO monad in Haskell was quite a revelation.
But it's *not* a fact. It's a lie. And one of the most devious sort, since the source code appears to agree. The purported definition couldn't possibly explain concurrency.
On Wed, Jul 11, 2018 at 7:21 AM, Vanessa McHale wrote:
> [quoted messages snipped]

The purported definition also mixes very badly with strictness analysis. For IO, m >>= f should be *lazy* in f (e.g., print 3 >> undefined should print 3 before throwing an exception), but the purported definition would suggest it's *strict* (that print 3 >> undefined is equivalent to undefined).
On Wed, Jul 11, 2018, 6:20 PM Conal Elliott wrote:
> [quoted messages snipped]

I'm not sure I follow. Do you mean that IO is not a monad because equivalence of values cannot be defined? Or is it something deeper? On 07/11/2018 05:19 PM, Conal Elliott wrote:
> [quoted messages snipped]

In the presence of concurrency, IO requires runtime support and can't be simply defined in Haskell proper.
On Wed, Jul 11, 2018 at 9:59 PM Vanessa McHale
I'm not sure I follow. Do you mean that IO is not a monad because equivalence of values cannot be defined? Or is it something deeper? On 07/11/2018 05:19 PM, Conal Elliott wrote:
The fact that you can define the IO monad in Haskell was quite a revelation.
But it's *not* a fact. It's a lie. And one of the most devious sort, since the source code appears to agree. The purported definition couldn't possibly explain concurrency.
On Wed, Jul 11, 2018 at 7:21 AM, Vanessa McHale
wrote: I find it quite elegant! The fact that you can define the IO monad in Haskell was quite a revelation. And it's especially nice when paired with a demonstration of C FFI (where you might *need* to sequence side effects such as freeing a value after it has been read).
newtype IO http://hackage.haskell.org/package/ghc-prim-0.5.2.0/docs/src/GHC.Types.html#... a http://hackage.haskell.org/package/ghc-prim-0.5.2.0/docs/src/GHC.Types.html#... = IO http://hackage.haskell.org/package/ghc-prim-0.5.2.0/docs/src/GHC.Types.html#... (State# http://hackage.haskell.org/package/ghc-prim-0.5.2.0/docs/src/GHC.Prim.html#S... RealWorld http://hackage.haskell.org/package/ghc-prim-0.5.2.0/docs/src/GHC.Prim.html#R... -> (# State# http://hackage.haskell.org/package/ghc-prim-0.5.2.0/docs/src/GHC.Prim.html#S... RealWorld http://hackage.haskell.org/package/ghc-prim-0.5.2.0/docs/src/GHC.Prim.html#R..., a http://hackage.haskell.org/package/ghc-prim-0.5.2.0/docs/src/GHC.Types.html#... #))
On 07/11/2018 09:14 AM, Stefan Monnier wrote:
In a few weeks I'm giving a talk to a bunch of genomics folk at the Sanger Institutehttps://www.sanger.ac.uk/ https://www.sanger.ac.uk/ about Haskell. They do lots of programming, but they aren't computer scientists. I can tell them plenty about Haskell, but I'm ill-equipped to answer the main question in their minds: why should I even care about Haskell? I'm too much of a biased witness.
I don't much like the monad solution for side-effects, but if those guys might have some knowledge of the horror of concurrent programming with locks, the STM system would be a good candidate.
Stefan
_______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to:http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post.
--
*Vanessa McHale* Functional Compiler Engineer | Chicago, IL
Website: www.iohk.io http://iohk.io Twitter: @vamchale PGP Key ID: 4209B7B5
This e-mail and any file transmitted with it are confidential and intended solely for the use of the recipient(s) to whom it is addressed. Dissemination, distribution, and/or copying of the transmission by anyone other than the intended recipient(s) is prohibited. If you have received this transmission in error please notify IOHK immediately and delete it from your system. E-mail transmissions cannot be guaranteed to be secure or error free. We do not accept liability for any loss, damage, or error arising from this transmission
-- brandon s allbery kf8nh sine nomine associates allbery.b@gmail.com ballbery@sinenomine.net unix, openafs, kerberos, infrastructure, xmonad http://sinenomine.net

Concurrency *can* be defined in Haskell proper (the denotative subset), but
not compatibly with that "source code" and probably not at all with the
current particulars of IO. With the source code in question, even the "real
world" cannot evolve concurrently with program execution, let alone
different threads within an IO computation.
On Wed, Jul 11, 2018 at 7:03 PM, Brandon Allbery
In the presence of concurrency, IO requires runtime support and can't be simply defined in Haskell proper.
On Wed, Jul 11, 2018 at 9:59 PM Vanessa McHale
wrote: I'm not sure I follow. Do you mean that IO is not a monad because equivalence of values cannot be defined? Or is it something deeper? On 07/11/2018 05:19 PM, Conal Elliott wrote:

In this conversation I didn't mean that IO is not a monad (a separate
topic), but rather that the "definition" of IO is incompatible with the
truth of IO. (It's perhaps akin to "the Ken Thompson hack"; see
http://wiki.c2.com/?TheKenThompsonHack.)
As for IO being a monad, I think the claim is not only not true but is
ill-defined and hence "not even false". For a well-defined claim/question,
one would need an agreed-upon notion of equality, since the Monad laws are
equalities.
(Of course there are *other* input-output-like types, perhaps subsets of
Haskell IO, for which we can define equality usefully, even based on a
denotation. But those types are not IO. Some related remarks at
http://conal.net/blog/posts/notions-of-purity-in-haskell#comment-442.)
-- Conal
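To make the point concrete: the Monad laws are equalities between programs, so even *stating* them requires an agreed-upon `(==)` on results. For `Maybe` such an equality exists and the laws can be checked directly; for `IO` it does not, which is the gap Conal is pointing at. A small sketch (the helpers `k`, `h`, `m` are invented for illustration):

```haskell
-- The three Monad laws, checked here for Maybe, where (==) is available.
k :: Int -> Maybe Int
k x = Just (x + 1)

h :: Int -> Maybe Int
h x = Just (x * 2)

m :: Maybe Int
m = Just 3

main :: IO ()
main = print
  [ (return 3 >>= k) == k 3                          -- left identity
  , (m >>= return) == m                              -- right identity
  , ((m >>= k) >>= h) == (m >>= (\x -> k x >>= h))   -- associativity
  ]
-- prints [True,True,True]
```

There is no analogous `(==) :: IO a -> IO a -> Bool` with which to even write these checks down for IO.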

"SM" == Stefan Monnier
writes:
SM> I don't much like the monad solution for side-effects, but if those guys SM> might have some knowledge of the horror of concurrent programming with SM> locks, the STM system would be a good candidate. I would vote for STM too, especially when using retry within a logical block to indicate "try again when things might make more sense". When dealing with multiple variables, and queues that you're popping values from, that is a truly hard thing to say with traditional concurrent programming. -- John Wiegley GPG fingerprint = 4710 CF98 AF9B 327B B80F http://newartisans.com 60E1 46C4 BD1A 7AC1 4BA2
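A minimal sketch of the `retry` idiom John describes (the queue, values, and timing are invented for illustration): the transaction simply blocks until the data it needs appears, with no condition variables, locks, or lost wakeups to manage.

```haskell
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.STM

-- Pop a value, blocking until one is available. 'retry' abandons the
-- transaction and re-runs it when any TVar it has read changes --
-- "try again when things might make more sense".
pop :: TVar [Int] -> STM Int
pop queue = do
  xs <- readTVar queue
  case xs of
    []       -> retry
    (x:rest) -> do
      writeTVar queue rest
      return x

main :: IO ()
main = do
  q <- newTVarIO []
  _ <- forkIO $ do
    threadDelay 100000                      -- producer arrives later
    atomically $ modifyTVar q (++ [42])
  x <- atomically (pop q)                   -- blocks until the producer runs
  print x                                   -- prints 42
```

Composing `pop` over several queues inside one `atomically` block is where this really shines: the whole multi-variable transaction retries as a unit.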

This is not necessarily related to Haskell, as I've had this A-HA moment while watching the 1984 SICP lectures from MIT.

Anyway, at some point, Mr Sussman (or was it Mr Abelson?) used `+` as an argument to another function. I was bedazzled!

First of all, it was the syntactic novelty — `+` was not some rigid part of the syntax, it was just a name — secondly, it was not any name, it was the name of a *function* that was sent to another function. It was probably my first encounter with higher-order functions.

If I remember correctly, the code was along the lines of:

foldl (+) 0 [1,2,3]

On 11/07/2018 15:10, Simon Peyton Jones via Haskell-Cafe wrote:
-- Ionuț G. Stan | http://igstan.ro | http://bucharestfp.ro
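The `foldl (+) 0 [1,2,3]` example above is also the seed of the abstraction Simon mentions: sum and product differ only in the operator and the starting value, so abstracting those out gives fold. A minimal sketch (the names `mySum` and `myProduct` are invented here):

```haskell
-- sum and product share one shape; passing the operator and the
-- starting value as arguments turns each into a one-liner over foldr.
mySum :: [Int] -> Int
mySum = foldr (+) 0

myProduct :: [Int] -> Int
myProduct = foldr (*) 1

main :: IO ()
main = do
  print (mySum [1, 2, 3])      -- prints 6
  print (myProduct [1, 2, 3])  -- prints 6
```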

For me there were two important "aha" moments. Right at the beginning I was
drawn in by using ADTs and pattern-matching on them. It's so simple and
plain and now it's the first thing I miss in any other language I have to
work with. I feel like this would make a short, compelling example for
programmers coming from any other background.
The second was reading Wadler's "Monads for Functional Programming" (and
reading it a second and third time, if we're being honest). The ways in
which he takes three seemingly disconnected examples and reduces them to
this broader mathematical abstraction: I found it quite beautiful and
surprising once I fully appreciated it.
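Erik's first point fits in a handful of lines; here is an invented genomics-flavoured example (chosen with the Sanger audience in mind). With -Wall, GHC warns if `complement` forgets a constructor, which is exactly the help pattern matching on ADTs buys you:

```haskell
-- An algebraic data type with one constructor per DNA base.
data Base = A | C | G | T deriving (Show, Eq)

-- Pattern matching covers every case; the compiler checks exhaustiveness.
complement :: Base -> Base
complement A = T
complement T = A
complement C = G
complement G = C

main :: IO ()
main = print (map complement [A, C, G, T])  -- prints [T,G,C,A]
```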
-- Erik Aker

I think using laziness for dynamic programming is a pretty amazing thing:
http://jelv.is/blog/Lazy-Dynamic-Programming/
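A minimal sketch of the idea in that post, using Fibonacci rather than the post's edit-distance example: the memo table's entries refer back to the table itself, and laziness fills each one in at most once, on demand. No explicit mutation or evaluation order is written down.

```haskell
import Data.Array

-- Memoised Fibonacci: 'table' is defined in terms of itself, which is
-- only possible because the array's elements are lazy.
fib :: Int -> Integer
fib n = table ! n
  where
    table = listArray (0, n) (map go [0 .. n])
    go 0 = 0
    go 1 = 1
    go i = table ! (i - 1) + table ! (i - 2)

main :: IO ()
main = print (fib 50)  -- prints 12586269025
```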

For me there were two important "aha" moments. Right at the beginning I was drawn in by using ADTs and pattern-matching on them. It's so simple and plain and now it's the first thing I miss in any other language I have to work with. I feel like this would make a short, compelling example for programmers coming from any other background.
Indeed, the one thing I really miss when hacking on Elisp is the great help I get from my Haskell/ML typecheckers when I modify one of my datatypes, showing me exhaustively (or close enough) all the places where changes are needed.

Stefan

I came to Haskell from C++, and I was used to the idea of parametric types and functions from C++ templates. However, what I really liked about Haskell's way of doing these was that, due to type inference, not only is there a lot less ceremony involved, but code is generic (parametric) *by default*, and thus much more reusable, although you still have the option of tightening it up by adding a type annotation (eg for performance reasons). For a while, I was writing all my C++ code as templates, but this ended up being a pain.

Also, traditionally C++ has not allowed you to place any constraints on template arguments (type parameters), whereas Haskell's type classes are a very elegant way of doing it. (C++ now has concepts, but I haven't taken the time yet to see how they compare with type classes.)

I was also used to function overloading, in which the compiler will pick an implementation based on the types of a function's arguments. This is similar to Haskell picking an implementation from among type class instances. However, what blew me away is that Haskell can overload based on *return type* as well as argument types. I haven't seen any other production-ready language that can do this. A great example of how this is useful is the regex library, where you can select from among widely varying styles of regex matching result simply by type inference, ie without needing any type annotation.

There were a *lot* of other things I found amazing, but others have covered many of these already. Also, languages are borrowing from each other at a rapid rate these days (eg Rust traits are equivalent to type classes), so it's hard to find a "killer feature" in Haskell any more (except laziness, perhaps). I think it's more the well-balanced combination of all the features that makes Haskell so pleasant to work in, and it's hard to demonstrate all of these in a single example.
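Return-type overloading can be shown without any library setup: the standard `read` function resolves to a different parser depending only on the type the context demands.

```haskell
-- One name, three behaviours, selected purely by the expected result type.
main :: IO ()
main = do
  print (read "42" :: Int)          -- prints 42
  print (read "42" :: Double)       -- prints 42.0
  print (read "[1,2,3]" :: [Int])   -- prints [1,2,3]
```

In real code the annotations are usually unnecessary because inference propagates the demanded type from the surrounding context, which is exactly what the regex example exploits.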
My own favourite "gem" is this code for computing all primes, based on code in a paper[1] by Doug McIlroy:

primes = sieve [2..]
  where sieve (p : ns) = p : sieve [n | n <- ns, n `mod` p /= 0]

I think care must be exercised, when using examples like this one, to avoid giving the impression that Haskell is a "toy" language. However, what I find interesting about this example is that all other sieve implementations I've seen work on a fixed size of sieve up front, and if you later change your mind about how many primes you want, eg because you're expanding a hash table and want a bigger prime for the size, you typically have to start the sieve from scratch again.

[1]: http://www.cs.dartmouth.edu/~doug/sieve/sieve.pdf

Also, languages are borrowing from each other at a rapid rate these days (eg Rust traits are equivalent to type classes) so it's hard to find a "killer feature" in Haskell any more
That’s true, and to be celebrated!
One thing that stands out for me is the ability to abstract over type constructors:
f :: forall (m :: * -> *) (a :: *). Monad m => a -> m a
That ability is what has given rise to a stunningly huge collection of abstractions: not just Monad, but Functor, Applicative, Traversable, Foldable, etc etc etc. Really a lot. It opens up a new way to think about the world. But only made possible by that one feature. (Plus type classes of course.)
Do any statically typed languages other than Haskell and Scala do this?
Simon
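A small illustration of why abstracting over `m` matters: a function written once against `Monad m` runs unchanged at `Maybe`, at lists, at `IO`, and so on (the name `pairUp` is invented here for illustration).

```haskell
-- Written once, against any monad m.
pairUp :: Monad m => m a -> m (a, a)
pairUp mx = do
  x <- mx
  y <- mx
  return (x, y)

main :: IO ()
main = do
  print (pairUp (Just 1))   -- Maybe: prints Just (1,1)
  print (pairUp [1, 2])     -- lists: prints [(1,1),(1,2),(2,1),(2,2)]
```

The same definition at `IO` would run the action twice and pair the results; nothing about `pairUp` needs to change.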

Idris allows such abstractions in much the same way Haskell does. ATS allows monads via templates (see e.g. https://github.com/githwxi/ATS-Postiats/blob/e83a467485857d568e20512b486ee52...), but they're kind of broken in practice in that you can have only one instance per executable/library (!)

On 07/11/2018 11:24 AM, Simon Peyton Jones via Haskell-Cafe wrote:

Haskell is the only language I know of with typeclass coherence.

On Wed, Jul 11, 2018, 12:25 PM Simon Peyton Jones via Haskell-Cafe <haskell-cafe@haskell.org> wrote:
Also, languages are borrowing from each other at a rapid rate these days (eg Rust traits are equivalent to type classes) so it's hard to find a "killer feature" in Haskell any more
That’s true, and to be celebrated!
One thing that stands our for me is the ability to abstract over type *constructors*:
f :: forall (m :: * -> *) (a :: *). Monad m => a -> m a
That ability is what has given rise to a stunningly huge collection of abstractions: not just Monad, but Functor, Applicative, Traversable, Foldable, etc etc etc. Really a lot. It opens up a new way to think about the world. But only made possible by that one feature. (Plus type classes of course.)
Do any statically typed languages other than Haskell and Scala do this?
Simon
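The abstraction Simon describes can be seen in a few lines: one function, written once against Functor, then works unchanged for every container shape. (A minimal sketch; `annotate` is a made-up name for illustration.)

```haskell
-- One function over ANY container shape f: pair each string with its length.
annotate :: Functor f => f String -> f (String, Int)
annotate = fmap (\s -> (s, length s))

main :: IO ()
main = do
  print (annotate (Just "gene"))                       -- Just ("gene",4)
  print (annotate ["acgt", "tt"])                      -- [("acgt",4),("tt",2)]
  print (annotate (Right "x" :: Either String String)) -- Right ("x",1)
```

The same definition serves optional values, lists, and error-carrying results; in most mainstream languages each of those would need its own map-like method.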
*From:* Haskell-Cafe *On Behalf Of* Neil Mayhew
*Sent:* 11 July 2018 17:12
*To:* Haskell Cafe
*Subject:* Re: [Haskell-cafe] What is your favourite Haskell "aha" moment?

I came to Haskell from C++, and I was used to the idea of parametric types and functions from C++ templates.
However, what I really liked about Haskell's way of doing these was that, due to type inference, not only is there a lot less ceremony involved, but code is generic (parametric) *by default*, and thus much more reusable, although you still have the option of tightening it up by adding a type annotation (eg for performance reasons). For a while, I was writing all my C++ code as templates, but this ended up being a pain.
Also, traditionally C++ has not allowed you to place any constraints on template arguments (type parameters) whereas Haskell's type classes are a very elegant way of doing it. (C++ now has concepts, but I haven't taken the time yet to see how they compare with type classes.)
I was also used to function overloading, where the compiler picks an implementation based on the types of a function's arguments. This is similar to Haskell picking an implementation from among type class instances. However, what blew me away is that Haskell can overload based on *return type* as well as argument types. I haven't seen any other production-ready language that can do this. A great example of how this is useful is the regex library, where you can select from among widely varying styles of regex matching result simply by type inference, ie without needing any type annotation.
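A base-Haskell miniature of the same mechanism (the regex example needs a library, but `read` ships with the Prelude): the same call site picks a different implementation depending only on the type the caller wants back.

```haskell
-- Return-type overloading: `read` chooses its parser from the RESULT type.
main :: IO ()
main = do
  print (read "42" :: Int)                      -- the Int instance: 42
  print (read "42" :: Double)                   -- same input, different instance: 42.0
  print (map negate (read "[1,2,3]" :: [Int]))  -- context fixes the type: [-1,-2,-3]
```

No argument differs between the first two calls; only the demanded result type does.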
There were a *lot* of other things I found amazing, but others have covered many of these already. Also, languages are borrowing from each other at a rapid rate these days (eg Rust traits are equivalent to type classes) so it's hard to find a "killer feature" in Haskell any more (except laziness, perhaps). I think it's more the well-balanced combination of all the features that makes Haskell so pleasant to work in, and it's hard to demonstrate all of these in a single example.
My own favourite "gem" is this code for computing all primes, based on code in a paper[1] by Doug McIlroy:
primes = sieve [2..] where sieve (p : ns) = p : sieve [n | n <- ns, n `mod` p /= 0]

On 11.07.2018 at 18:24, Simon Peyton Jones via Haskell-Cafe wrote:
One thing that stands out for me is the ability to abstract over type *constructors*:
f :: forall (m :: * -> *) (a :: *). Monad m => a -> m a
That ability is what has given rise to a stunningly huge collection of abstractions: not just Monad, but Functor, Applicative, Traversable, Foldable, etc etc etc. Really a lot. It opens up a new way to think about the world. But only made possible by that one feature. (Plus type classes of course.)
Do any statically typed languages other than Haskell and Scala do this?
Not this, but the description reminded me of my own aha moment, which came when I looked at Eiffel's data structure library. Eiffel does multiple inheritance pretty well, so the designers went ahead and structured the library using a classifier approach: bounded vs. unbounded data structures, updatable vs. non-updatable ones, indexable vs. merely iterable. Any concrete class is a subtype of several of these, and the classifying types in turn have subtypes that add more detail, e.g. set vs. multiset (bag) vs. map. This created an extraordinarily uniform API where equal things had equal names and consistent semantics, something that is rare even in well-designed libraries. (The library does have its weaknesses, which are due to the designer's aggressively update-in-place mindset, so in a discussion it's probably best to avoid mentioning Eiffel if you wish to highlight FP.)
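Haskell reaches a similarly uniform vocabulary through type classes rather than inheritance: Foldable gives every container the same "iterate over the elements" API, with consistent names and semantics. A small base-only sketch:

```haskell
import Data.Foldable (toList)

main :: IO ()
main = do
  print (sum [1, 2, 3 :: Int])                    -- lists: 6
  print (sum (Just 5))                            -- Maybe is a 0-or-1-element container: 5
  print (length (Just 'x'))                       -- same name, same meaning: 1
  print (toList (Left "e" :: Either String Int))  -- a failed Either has no elements: []
```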

[Shameless promotion] I once wrote a parser combinator library; something everyone does once in their life ;-} Then I realised that I could run parsers in an interleaved way, and wrote the package uu-interleaved. In remarkably few lines of code it can turn a parser combinator library into a library that can run parsers in an interleaved way. If you look at the code in http://hackage.haskell.org/package/uu-interleaved-0.2.0.1/docs/src/Control.A... this boils down to writing a few instances for a new data type. Although the code is intricate it is very short, and the types guided me to get it correct. Without the types I would have spent ages getting things to work.

Then I realised that by adding just a few extra lines of code this could be turned into a package for processing command-line arguments in a very broad sense: repeating arguments, dealing with missing obligatory arguments, optional arguments, parsing the arguments according to what they stand for, and reporting errors in the command line in a systematic way. Surprisingly, the code of this package is probably smaller than what an ordinary program spends on processing its command-line arguments, while providing much greater safety. [End of shameless promotion]

If your audience consists of experienced programmers, they must have spent quite some time on code that is no longer necessary when using such a package.

Doaitse
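This is not uu-interleaved itself, just a minimal sketch of the parser-combinator style it builds on: a parser is an ordinary function, and the Functor/Applicative/Alternative instances give sequencing, choice, and repetition almost for free.

```haskell
import Control.Applicative (Alternative (..))
import Data.Char (isDigit)

-- A parser is a function: consume a prefix of the input, or fail.
newtype Parser a = Parser { runParser :: String -> Maybe (a, String) }

instance Functor Parser where
  fmap f (Parser p) = Parser $ \s -> fmap (\(a, r) -> (f a, r)) (p s)

instance Applicative Parser where
  pure a = Parser $ \s -> Just (a, s)
  Parser pf <*> Parser pa = Parser $ \s -> do
    (f, s')  <- pf s
    (a, s'') <- pa s'
    Just (f a, s'')

instance Alternative Parser where
  empty = Parser (const Nothing)
  Parser p <|> Parser q = Parser $ \s -> p s <|> q s

-- Accept one character satisfying a predicate.
satisfy :: (Char -> Bool) -> Parser Char
satisfy ok = Parser $ \s -> case s of
  (c : cs) | ok c -> Just (c, cs)
  _               -> Nothing

-- `some` (one-or-more) comes for free from Alternative.
number :: Parser Int
number = read <$> some (satisfy isDigit)

main :: IO ()
main = print (runParser ((,) <$> number <* satisfy (== ',') <*> number) "12,34")
```

The whole usable core is a few instances, which is exactly the shape of library Doaitse describes.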
On 11 Jul 2018, at 14:10, Simon Peyton Jones via Haskell-Cafe wrote:

We make extensive use of Servant for our web services at Plow Technologies. It is my favorite example of the success of using types (even relatively fancy types) to help with a common problem (creating web sites with web services) and reduce boilerplate. I like the examples in the tutorial: https://haskell-servant.readthedocs.io/en/stable/tutorial/index.html

Scott Fleischman

On Wed, Jul 11, 2018 at 5:10 AM, Simon Peyton Jones via Haskell-Cafe <haskell-cafe@haskell.org> wrote:

I feel like the one theme that's been missing in all of this is the interaction between equational reasoning and rewrite rules. Examples of fusing operations on Text or ByteString were pretty impressive to me. The ideas may be incorporated into other languages, but I believe Haskell is pretty unique, at least versus mainstream languages, in doing the fusion in the optimizer, where there's still opportunity for the results to be inlined and further optimized. I don't have a complete example off the top of my head.

On Wed, Jul 11, 2018 at 3:31 PM Scott Fleischman <scott.fleischman@plowtech.net> wrote:
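Chris mentions not having a complete example to hand; the classic small one can stand in (a sketch of the mechanism, not the actual Text/ByteString machinery): a user-written RULES pragma telling GHC that, with optimisation on, it may rewrite two list traversals into one. The program's meaning is unchanged, only the work done changes, and the rewrite is safe precisely because equational reasoning holds.

```haskell
module Main where

-- The canonical fusion rule: mapping twice equals mapping once with the
-- composed function. GHC applies this during optimisation (-O).
{-# RULES
"map/map" forall f g xs. map f (map g xs) = map (f . g) xs
  #-}

main :: IO ()
main = print (map (+ 1) (map (* 2) [1 .. 5 :: Int]))  -- [3,5,7,9,11]
```

Because the rule is an equation the programmer can state and check by hand, libraries like `text` ship whole systems of such rules and let the optimizer fuse user pipelines.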

"Blow your mind" on the Haskell Wiki[1] is a collection of short bits of
awesome code.
[1] https://wiki.haskell.org/Blow_your_mind
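In the spirit of that page, here is one such short bit (a sketch using only `base`): the powerset of a list in one line, by running `filterM` in the list monad so that each element is both kept and dropped.

```haskell
import Control.Monad (filterM)

-- Every subset arises from one sequence of keep/drop choices per element.
powerset :: [a] -> [[a]]
powerset = filterM (const [True, False])

main :: IO ()
main = print (powerset [1, 2, 3 :: Int])
-- [[1,2,3],[1,2],[1,3],[1],[2,3],[2],[3],[]]
```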
On Wed, Jul 11, 2018 at 2:40 PM Chris Smith wrote:
--
Jeff Brown | Jeffrey Benjamin Brown
Website https://msu.edu/~brown202/ | Facebook https://www.facebook.com/mejeff.younotjeff | LinkedIn https://www.linkedin.com/in/jeffreybenjaminbrown (spammy, so I often miss messages here) | Github https://github.com/jeffreybenjaminbrown

Gabriel Gonzalez's "Haskell for all" blog is a wellspring of great
candidates. This one came up in another thread yesterday, as an example of
what you can get from treating IO actions as data:
http://www.haskellforall.com/2018/02/the-wizard-monoid.html
On Wed, Jul 11, 2018, 12:51 PM Jeffrey Brown wrote:

On Wed, 11 Jul 2018, Simon Peyton Jones wrote:
Friends
In a few weeks I'm giving a talk to a bunch of genomics folk at the Sanger Institutehttps://www.sanger.ac.uk/ about Haskell. They do lots of programming, but they aren't computer scientists.
I can tell them plenty about Haskell, but I'm ill-equipped to answer the main question in their minds: why should I even care about Haskell? I'm too much of a biased witness.
The compiler enforces comments. The type system is the mechanism of this enforcement.

oo--JS.
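A small illustration of the "compiler-enforced comments" idea: newtype wrappers (the names below are invented for illustration) turn documentation like "this String is a sample ID, not a run ID" into something the type checker refuses to let you mix up.

```haskell
-- Two kinds of identifier that would otherwise both be bare Strings.
newtype SampleId = SampleId String
newtype RunId    = RunId String

describeSample :: SampleId -> String
describeSample (SampleId s) = "sample " ++ s

main :: IO ()
main = putStrLn (describeSample (SampleId "HG002"))
-- describeSample (RunId "R1") would be rejected at compile time.
```

The "comment" (what kind of string this is) can never go stale, because the compiler checks it on every call.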

On 11.07.2018 at 14:10, Simon Peyton Jones via Haskell-Cafe wrote:
Another thing that I think comes over easily is the ability to abstract: generalising sum and product to fold by abstracting out a functional argument;
You can expound more on that: people don't have to write loops anymore (with their fencepost issues etc.). It's like a "for" loop, but written using in-language facilities instead of having to live with what the language designer does. This means you can roll your own loop constructs.

Consider iterating over the elements of a data structure such as List, Tree, etc. Most languages offer an Iterable interface, which tends to be somewhat messy to implement. With a higher-order function, you can just write down the loop, *once*, and the loop body will be provided by a function parameter, and voilà! you have your loop construct.

HTH
Jo
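Jo's point in code (a minimal sketch with invented names): the "loop" over a tree is written exactly once, and each caller supplies only the loop body. No Iterable interface, no fencepost bookkeeping.

```haskell
data Tree a = Leaf | Node (Tree a) a (Tree a)

-- The traversal, written once: an in-order right fold over the tree.
foldTree :: (a -> b -> b) -> b -> Tree a -> b
foldTree _ z Leaf         = z
foldTree f z (Node l x r) = foldTree f (f x (foldTree f z r)) l

example :: Tree Int
example = Node (Node Leaf 1 Leaf) 2 (Node Leaf 3 Leaf)

main :: IO ()
main = do
  print (foldTree (+) 0 example)   -- a "sum loop": 6
  print (foldTree (:) [] example)  -- a "collect loop": [1,2,3]
```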

Although I don't program regularly in Haskell these days (my poison is Python, mainly for Web framework support), I do occasionally find myself coding tricky manipulations in Haskell first, as I find it easier to concentrate on the essentials of an algorithm. Once I have the Haskell code written and tested, I generally find it fairly easy to map the algorithm into Python (using higher-order functions as appropriate).

Here are some examples:

https://github.com/gklyne/annalist/blob/master/spike/rearrange-list/move_up....
https://github.com/gklyne/annalist/blob/master/spike/tree-scan/tree_scan.lhs

And the corresponding code in the actual application:

https://github.com/gklyne/annalist/blob/4d21250a3457c72d4f6525e5a4fac40d4c0c...
https://github.com/gklyne/annalist/blob/master/src/annalist_root/annalist/mo...

#g
--

On 11/07/2018 13:10, Simon Peyton Jones via Haskell-Cafe wrote:

Python is poison, indeed. ;)

Brett Gilio
brettg@posteo.net | bmg@member.fsf.org
Free Software -- Free Society!

On 07/12/2018 04:56 AM, Graham Klyne wrote:
Although I don't program regularly in Haskell these days (my poison is Python, mainly for Web framework support), I do occasionally find myself coding tricky manipulations in Haskell first as I find it easier to concentrate on the essentials of an algorithm. Once I have the Haskell code written and tested, I generally find it fairly easy to map the algorithm into Python (using higher order functions as appropriate).
Here are some examples:
https://github.com/gklyne/annalist/blob/master/spike/rearrange-list/move_up....
https://github.com/gklyne/annalist/blob/master/spike/tree-scan/tree_scan.lhs
And the corresponding code in the actual application:
https://github.com/gklyne/annalist/blob/4d21250a3457c72d4f6525e5a4fac40d4c0c...
https://github.com/gklyne/annalist/blob/master/src/annalist_root/annalist/mo...
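To give a flavour of what Graham means, here is a minimal sketch (hypothetical, not his actual spike code) of the kind of list manipulation that is pleasant to prototype in Haskell first: move the first occurrence of an element one position towards the head of the list.

```haskell
-- Hypothetical sketch of a "move up" list operation, the sort of
-- tricky manipulation that is easy to get right in Haskell before
-- porting to Python.
moveUp :: Eq a => a -> [a] -> [a]
moveUp y (x:z:rest)
  | z == y    = z : x : rest          -- found it: swap with predecessor
  | otherwise = x : moveUp y (z:rest) -- keep scanning
moveUp _ xs = xs                      -- not found, or list too short

main :: IO ()
main = print (moveUp 'c' "abcd")  -- "acbd"
```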
#g --
On 11/07/2018 13:10, Simon Peyton Jones via Haskell-Cafe wrote:
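The quicksort Simon mentions fits on a slide roughly like this (a sketch using list comprehensions; it is not in-place, so it trades memory and speed for brevity and obvious correctness):

```haskell
-- Quicksort via list comprehensions: partition around the pivot,
-- sort each side recursively, and concatenate.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (p:xs) = qsort [x | x <- xs, x < p]
               ++ [p]
               ++ qsort [x | x <- xs, x >= p]

main :: IO ()
main = print (qsort [3, 1, 2])  -- [1,2,3]
```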

On Python, we are fixing it (WIP): https://github.com/qfpl/hpython

Did you see the recent "default mutable arguments" post? https://lethain.com/digg-v4/

Here is a Haskell program that transforms any Python program that uses default mutable arguments: https://github.com/qfpl/hpython/blob/master/example/FixMutableDefaultArgumen...

Someone once said that Python will never have tail call optimisation. Here is the Haskell program that transforms any Python (direct only) tail call into a loop: https://github.com/qfpl/hpython/blob/master/example/OptimizeTailRecursion.hs

Two Python programmers once argued over preferred indentation levels. Let's put their disagreement into a Haskell function so that they can stop arguing: https://github.com/qfpl/hpython/blob/master/example/Indentation.hs

On 07/12/2018 07:58 PM, Brett Gilio wrote:
Python is poison, indeed. ;)

On 13.07.2018 at 01:40, Tony Morris wrote:
On python, we are fixing it (WIP): https://github.com/qfpl/hpython
Some poisonous aspects of Python are unfixable. E.g. having declarations as program-visible update to a global state causes all kinds of unnecessary pain, such as having to deal with half-initialized peer modules during module initialization.
Did you see the recent "default mutable arguments" post? https://lethain.com/digg-v4/
Yeah, the "default parameter is an empty list but that's then going to be updated across invocations" classic. Other languages have made the same mistake. Default data needs to be either immutable or recreated on each call. Such mistakes can be mitigated, but they cannot be fixed while staying backwards-compatible.

Python actually did an incompatible switch from 2 to 3, but for some reason Guido didn't fix the above two, or some other things (such as Python's broken idea of multiple inheritance, or the horribly overcomplicated way annotations work). Python is nice for small tasks. If you accept its invitation to extend it you get fragility (due to the inability to define hard constraints, so whatever you extend will violate *somebody's* assumptions), and if you scale to large systems you get fragility as well (because Python modularization is too weak).

Well, enough of the off-topic thing here.
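For contrast, here is a sketch of how the same idiom looks in Haskell (hypothetical `appendDefault` function): there are no default arguments, but the usual `Maybe`-plus-`fromMaybe` pattern produces a fresh default on every call, and all values are immutable anyway, so the shared-mutable-default pitfall cannot arise.

```haskell
import Data.Maybe (fromMaybe)

-- "Default argument" via Maybe: the [] default is a fresh immutable
-- value each time, never shared mutable state between calls.
appendDefault :: Int -> Maybe [Int] -> [Int]
appendDefault x mxs = fromMaybe [] mxs ++ [x]

main :: IO ()
main = do
  print (appendDefault 1 Nothing)  -- [1]
  print (appendDefault 2 Nothing)  -- [2], not [1,2]: no state carried over
```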
On 07/12/2018 07:58 PM, Brett Gilio wrote:
Python is poison, indeed. ;)

I am not fixing Python. I am fixing the consequences of the fact that it exists.

On 07/13/2018 04:47 PM, Joachim Durchholz wrote:
On 13.07.2018 at 01:40, Tony Morris wrote:
On python, we are fixing it (WIP): https://github.com/qfpl/hpython
Some poisonous aspects of Python are unfixable. E.g. having declarations as program-visible update to a global state causes all kinds of unnecessary pain, such as having to deal with half-initialized peer modules during module initialization.

13.07.2018 09:47, Joachim Durchholz wrote:
On 13.07.2018 at 01:40, Tony Morris wrote:
On python, we are fixing it (WIP): https://github.com/qfpl/hpython
Some poisonous aspects of Python are unfixable. E.g. having declarations as program-visible update to a global state causes all kinds of unnecessary pain, such as having to deal with half-initialized peer modules during module initialization.
I never understood that point. Most Python programs have no global state, only global constants. You can have an "application" monad in Haskell (Reader); the same in Python: most Python programmers create some Application class. Also, OOP solved the "global state" problem long ago, for example Smalltalk (a similar idea can be found in Erlang): state is hidden behind some FSM which is represented as a class (a process in Erlang). You cannot change state anywhere; you can only send a message. The FSM guards you from any errors. Even more, FSM style is super safe and robust: you can visualize its behavior, develop it very quickly, debug it easily; the final code usually does not even contain errors... Even so, I can imagine a Reader monad with a big record inside, with a lot of fields, flags, etc., and getting absolutely the same spaghetti code with flag manipulation and testing in Haskell, just under a monad ;)
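The "application monad" idea mentioned here can be sketched in a few lines (a hand-rolled Reader with a hypothetical `Config` record, to avoid any dependency on the mtl package; in practice one would use `Control.Monad.Reader`):

```haskell
-- A minimal Reader monad: read-only configuration threaded implicitly.
newtype Reader r a = Reader { runReader :: r -> a }

instance Functor (Reader r) where
  fmap f (Reader g) = Reader (f . g)
instance Applicative (Reader r) where
  pure = Reader . const
  Reader f <*> Reader g = Reader (\r -> f r (g r))
instance Monad (Reader r) where
  Reader g >>= f = Reader (\r -> runReader (f (g r)) r)

-- Retrieve the whole environment.
ask :: Reader r r
ask = Reader id

-- Hypothetical application configuration.
data Config = Config { appName :: String, verbose :: Bool }

greeting :: Reader Config String
greeting = do
  cfg <- ask                              -- no global variable in sight
  pure ("hello from " ++ appName cfg)

main :: IO ()
main = putStrLn (runReader greeting (Config "demo" False))
```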
Python actually did an incompatible switch from 2 to 3, but for some reason Guido didn't fix the above two, or some other things (such as Python's broken idea of multiple inheritance, or the horribly overcomplicated way annotations work). Python is nice for small tasks. If you accept its invitation to extend it you get fragility (due to the inability to define hard constraints so whatever you extend will violate *somebody's* assumptions), and if you scale to large systems you get fragility as well (because Python modularization is too weak).
Interesting here is that there are big enterprise products written in Python, and they are very useful: https://www.codeinstitute.net/blog/7-popular-software-programs-written-in-py...

Hello,

On 13/07/18 09:54, PY wrote:
Application class. Also, OOP solved the "global state" problem long ago,
https://en.wikipedia.org/wiki/Singleton_pattern
Cheers, -- -alex https://unendli.ch/

On 13.07.2018 at 09:54, PY wrote:
13.07.2018 09:47, Joachim Durchholz wrote:
On 13.07.2018 at 01:40, Tony Morris wrote:
On python, we are fixing it (WIP): https://github.com/qfpl/hpython
Some poisonous aspects of Python are unfixable. E.g. having declarations as program-visible update to a global state causes all kinds of unnecessary pain, such as having to deal with half-initialized peer modules during module initialization.
I never understood that point. Most Python programs have no global state, only global constants.
Well, try doing work on SymPy then. I did. And I know for a fact that your idea is completely wrong; Python has a lot of global state, first and foremost all its modules and the data that lives inside them.
Also, OOP solved the "global state" problem long ago, for example Smalltalk (a similar idea can be found in Erlang): state is hidden behind some FSM which is represented as a class (a process in Erlang).
No amount of TLAs will solve the issues with mutable global state.
Actually the Digg story was about mutable global state, in the form of a default parameter. Now that's insidious.
You cannot change state anywhere; you can only send a message.
If the responses depend on the history of previous message sends, you have mutable state back, just by another name.
The FSM guards you from any errors. Even more, FSM style is super safe and robust: you can visualize its behavior, develop it very quickly, debug it easily; the final code usually does not even contain errors...
This all holds only for trivial FSMs. Once the FSM holds more than a dozen states, these advantages evaporate.
Even so, I can imagine a Reader monad with a big record inside, with a lot of fields, flags, etc., and getting absolutely the same spaghetti code with flag manipulation and testing in Haskell, just under a monad ;)
"Real programmers can write FORTRAN code in any language." Sure. Except that it's not idiomatic.
Interesting here is that there are big enterprise products written in Python, and they are very useful: https://www.codeinstitute.net/blog/7-popular-software-programs-written-in-py...
Sure. Except they do not use some features, such as lists in default parameters, or they face the consequences (as the Digg story demonstrates). Which means that this feature is conceptually broken. Except Python can't fix it by mandating that default parameters be read-only, which is impossible because Python has no hard-and-fast way to mark read-only objects as such. (Read-only means: ALL function calls must ALWAYS return the same results given equal arguments. With additional constraints on argument behaviour vs. constness, but that's a pretty big can of worms to define properly.)

Also, such companies use extra-lingual process to get the modularization under control. Such as conventions, patterns, style guidelines - and woe to the company where a programmer accidentally breaks them.

Finally, that "but large software IS successfully written in unsuitable languages" argument never held water. Entire operating systems have been written in assembly language, and are still being written in C. Which is a horrible misdecision from a reliability perspective, but it's being done, and the inevitable security holes are being duct-taped to keep the infrastructure limping along. Which the large companies are doing, too. They just make enough money to keep that infrastructure going. The same goes for Linux BTW; they have enough paid^Wsponsored engineers to solve all the problems they're having. The Haskell community does not have the luxury of excess engineers that can hold all the ripped-up steel together with duct tape.

I agree with most of your arguments. Yes, Python has a serious problem with module locking in multi-process apps. And the GIL. But these are trade-offs. And also, not all Python implementations have a GIL.
"Real programmers can write FORTRAN code in any language."
The same goes for my example of Reader monad usage 😉 There is no problem avoiding manipulation of global state in a way which leads to spaghetti code.
Once the FSM holds more than a dozen states, these advantages evaporate.
This is the only point where I cannot agree. I have used FSMs with hundreds of states/transitions. They were automatically generated; I only checked them. Also, I know that FSMs are widely used in car automatics (BMW, Mercedes, Audi), and widely in software for the space industry. My IMHO is: the FSM is the most reliable way to build software without bugs. It is also easy to verify them (for example, with transition assertions).

On Sat, Jul 14, 2018 at 10:17:43AM +0300, Paul wrote:
Once the FSM holds more than a dozen states, these advantages evaporate.
This is the only point where I cannot agree. I have used FSMs with hundreds of states/transitions. They were automatically generated; I only checked them. Also, I know that FSMs are widely used in car automatics (BMW, Mercedes, Audi), and widely in software for the space industry. My IMHO is: the FSM is the most reliable way to build software without bugs. It is also easy to verify them (for example, with transition assertions).
It's interesting to see all this chat about FSMs, when FSMs are essentially "just" a tail recursive function on a sum type.
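Tom's observation can be made concrete in a few lines (a minimal sketch with a hypothetical traffic-light state type, not anything from the thread): the states are a sum type, and running the machine is a tail-recursive function.

```haskell
-- An FSM is "just" a sum type of states plus a transition function.
data Light = Red | Green | Yellow deriving (Show, Eq)

step :: Light -> Light
step Red    = Green
step Green  = Yellow
step Yellow = Red

-- Run n transitions, tail-recursively.
run :: Int -> Light -> Light
run 0 s = s
run n s = run (n - 1) (step s)

main :: IO ()
main = print (run 4 Red)  -- Green
```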

16.07.2018 09:44, Tom Ellis wrote:
On Sat, Jul 14, 2018 at 10:17:43AM +0300, Paul wrote:
It's interesting to see all this chat about FSMs, when FSMs are essentially "just" a tail recursive function on a sum type.

Yes :) But even better is to represent the FSM as a table or diagram: then you can easily find right/wrong transitions and states. Any information can be represented in different forms, but only some of them are good for humans ;)
Btw, there are a lot of visual tools for working with FSMs, for developing and testing them, as well as for translating them to some language.

13.07.2018 02:40, Tony Morris wrote:
On python, we are fixing it (WIP): https://github.com/qfpl/hpython
Did you see the recent "default mutable arguments" post? https://lethain.com/digg-v4/
Here is a Haskell program that transforms any python program that uses default mutable arguments: https://github.com/qfpl/hpython/blob/master/example/FixMutableDefaultArgumen...
Someone once said that python will never have tail call optimisation. Here is the Haskell program that transforms any Python (direct only) tail call into a loop: https://github.com/qfpl/hpython/blob/master/example/OptimizeTailRecursion.hs
Two python programmers once argued over preferred indentation levels. Let's put their disagreement into a Haskell function so that they can stop arguing: https://github.com/qfpl/hpython/blob/master/example/Indentation.hs
IMHO it would be good to port such successful "experiments" to the PyPy project in some way, if it's possible.

Another example of Haskell-influenced programming in another language (this time JavaScript, before promises and event-driven frameworks were a thing): I implemented a class somewhat based on Haskell's Monad class to organize callback sequencing in a jQuery-based UI. The (aged) code is here:

https://github.com/gklyne/shuffl/blob/master/src/AsyncComputation.js

and an example of use:

https://github.com/gklyne/shuffl/blob/master/src/shuffl-loadworkspace.js#L18...

#g --

On 11/07/2018 13:10, Simon Peyton Jones via Haskell-Cafe wrote:
Friends In a few weeks I'm giving a talk to a bunch of genomics folk at the Sanger Institutehttps://www.sanger.ac.uk/ about Haskell. They do lots of programming, but they aren't computer scientists. I can tell them plenty about Haskell, but I'm ill-equipped to answer the main question in their minds: why should I even care about Haskell? I'm too much of a biased witness.
So I thought I'd ask you for help. War stories perhaps - how using Haskell worked (or didn't) for you. But rather than talk generalities, I'd love to illustrate with copious examples of beautiful code.
* Can you identify a few lines of Haskell that best characterise what you think makes Haskell distinctively worth caring about? Something that gave you an "aha" moment, or that feeling of joy when you truly make sense of something for the first time. The challenge is, of course, that this audience will know no Haskell, so muttering about Cartesian Closed Categories isn't going to do it for them. I need examples that I can present in 5 minutes, without needing a long setup. To take a very basic example, consider Quicksort using list comprehensions, compared with its equivalent in C. It's so short, so obviously right, whereas doing the right thing with in-place update in C notoriously prone to fencepost errors etc. But it also makes much less good use of memory, and is likely to run slower. I think I can do that in 5 minutes. Another thing that I think comes over easily is the ability to abstract: generalising sum and product to fold by abstracting out a functional argument; generalising at the type level by polymorphism, including polymorphism over higher-kinded type constructors. Maybe 8 minutes. But you will have more and better ideas, and (crucially) ideas that are more credibly grounded in the day to day reality of writing programs that get work done. Pointers to your favourite blog posts would be another avenue. (I love the Haskell Weekly News.) Finally, I know that some of you use Haskell specifically for genomics work, and maybe some of your insights would be particularly relevant for the Sanger audience. Thank you! Perhaps your responses on this thread (if any) may be helpful to more than just me. Simon
_______________________________________________ Haskell-Cafe mailing list To (un)subscribe, modify options or view archives go to: http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe Only members subscribed via the mailman list are allowed to post.
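The sum/product-to-fold generalisation mentioned in the message above can also be shown briefly; this is a minimal sketch using the standard `foldr`:

```haskell
-- sum and product are the same recursion with a different operator and unit;
-- foldr abstracts that pattern out as a functional argument.
mySum :: [Int] -> Int
mySum = foldr (+) 0

myProduct :: [Int] -> Int
myProduct = foldr (*) 1

-- The same abstraction works at other types: concatenation is also a fold.
myConcat :: [[a]] -> [a]
myConcat = foldr (++) []

main :: IO ()
main = do
  print (mySum [1 .. 10])         -- 55
  print (myProduct [1 .. 5])      -- 120
  print (myConcat [[1], [2, 3]])  -- [1,2,3]
```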

Another thing worth mentioning is the following: an enormous amount of programmer time is spent on managing memory and scheduling computations. 1) Automatic garbage collection has freed us from the necessity to think about WHEN THE LIFE OF A VALUE ENDS. 2) Lazy evaluation frees us from having to think about WHEN THE LIFE OF A VALUE STARTS. So a lazy purely functional language like Haskell frees the programmer from having to think about scheduling computations. Just as you can make garbage collection explicit in your code by using assignments (making explicit that you do not need the value stored in the variable any more), you can make evaluation in your program explicit by making arguments strict and using `seq` etc. Both of these make life more complicated in the first place, although they may lead to faster code taking less memory; they are optimisations that should only be applied when unavoidable. Doaitse
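A small sketch of the point above (the names here are illustrative): a lazily bound value is only computed if it is demanded, and `seq` is the explicit opt-in to earlier evaluation:

```haskell
-- Laziness: 'expensive' is never evaluated unless its value is demanded.
expensive :: Int
expensive = sum [1 .. 100000000]  -- costly if forced

lazyPair :: (Int, Int)
lazyPair = (1 + 1, expensive)

main :: IO ()
main = do
  -- Only the first component is demanded, so 'expensive' is never computed.
  print (fst lazyPair)
  -- 'seq' makes scheduling explicit: force x to weak head normal form first.
  let x = 2 + 2 :: Int
  x `seq` print x
```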

Not sure if it counts as "aha moments", but when I started with Haskell I had two major reasons (not in any order of importance):

1. The ability to define the specification (types) and then "just" follow it in implementation. Sometimes even without having a clear understanding of the things I was using, I felt (and still feel) guided towards the right solution.

2. The ability to refactor fearlessly is a _massive_ productivity boost. Hard to overstate.

Regards, Alexey.

Alexey, could you expand on what you mean in your first point? I am quite intrigued. I do not use Haskell often, but that could be something of interest to me in and out of Haskell. Brett Gilio brettg@posteo.net | bmg@member.fsf.org Free Software -- Free Society!

On 12 Jul 2018, at 15:01, Brett Gilio wrote: Alexey, could you expand on what you mean in your first point? I am quite intrigued.

In the old days, when I wrote Pascal programs and wanted to swap values, I wrote code like:

procedure swap(var x, y: integer);
begin
  x := x + y;
  y := x - y;
  x := x - y
end;

only to find out that if I wanted a swap for a different type, this does not work. Hence a lot of coding, hardly any reuse, and very error-prone code. So the question is: how many functions can you write with the type (a, a) -> (a, a)? If you randomly generate functions of this type there is a 25% chance you get the right one (there are only four such functions). But things become even better:

swap (a, b) = (b, a)

Once you ask for the type you get (a, b) -> (b, a), hence the type completely specifies what swap computes, and the function is even more general than the version with the type above. Doaitse
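A minimal runnable sketch of the point above: one polymorphic definition replaces the per-type Pascal procedure, and the same code works unchanged at every pair type (the example values are mine):

```haskell
-- The type (a, b) -> (b, a) pins this function down completely.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

main :: IO ()
main = do
  print (swap (1 :: Int, "gene"))       -- works at (Int, String)
  print (swap (True, 3.14 :: Double))   -- and at (Bool, Double), unchanged
```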

Alexey, could you expand on what you mean in your first point?
I guess that I meant two things here.

First is that when I write a signature for my function, the compiler will do its best to help me implement it. It will yell at me, it will not let me use things that I am not supposed to use (according to constraints), etc. The more precise I am with my types (e.g. use a non-empty list instead of just a list, use a specific ADT instead of Bools, use Age/Weight/Size instead of Int, etc.), the more help I get.
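The "Age/Weight instead of Int" idea above can be sketched with newtypes (the function names here are illustrative):

```haskell
-- Wrapping raw Ints so the compiler can tell an Age from a Weight.
newtype Age    = Age Int    deriving (Eq, Show)
newtype Weight = Weight Int deriving (Eq, Show)

-- Accepts only an Age; passing a Weight (or a bare Int) is a type error.
isAdult :: Age -> Bool
isAdult (Age n) = n >= 18

main :: IO ()
main = do
  print (isAdult (Age 30))      -- True
  -- isAdult (Weight 30)        -- rejected at compile time
```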
Another thing is that sometimes I'd just play "Type Tetris" to make things compile and work. Try something, the compiler says "No, can't have this", perhaps makes a suggestion, try another thing, "aha, next step", etc. Learned so much from these "games" :)

Regards,
Alexey.
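GHC supports this "Type Tetris" style directly with typed holes: leave an underscore where you are stuck and the compiler reports the type it needs, often with valid candidates. A small sketch (the parseAll example and its names are mine, not Alexey's), shown with the hole already filled in after the compiler's hint:

```haskell
import Text.Read (readMaybe)

-- While writing this, one can put '_' in place of 'traverse' and GHC
-- reports the hole's type, roughly
--   Found hole: _ :: (String -> Maybe Int) -> [String] -> Maybe [Int]
-- along with valid hole fits such as 'traverse' and 'mapM'.
parseAll :: [String] -> Maybe [Int]
parseAll = traverse readMaybe

main :: IO ()
main = do
  print (parseAll ["1", "2", "3"])  -- Just [1,2,3]
  print (parseAll ["1", "oops"])    -- Nothing
```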

On 07/12/2018 09:04 AM, Alexey Raga wrote:
Another thing is that sometimes I'd just play "Type Tetris" to make things compile and work. Try something, the compiler says "No, can't have this", perhaps make a suggestion, try another thing, "aha, next step", etc. Learned so much from these "games" :)
Ha! I like your approach. Thank you for your explanation. -- Brett Gilio brettg@posteo.net | bmg@member.fsf.org Free Software -- Free Society!

On 12/07/2018 at 15:01, Brett Gilio wrote: Alexey, could you expand on what you mean in your first point? I am quite intrigued. I do not use Haskell often, but that could be something of interest to me in and out of Haskell.

I am not Alexey Raga, who clarifies: "when I write a signature for my function, the compiler will do its best to help me implement it. It will yell at me, it will not let me use things that I am not supposed to use (according to constraints), etc." But I think that there is more to tell, since in *all languages* the compiler does its best to profit from typing in order to optimise the implementation. With polymorphic typing and automatic type inference, the compiler can do a little more. It seems that people have already forgotten the "toy" (not so...) of Lennart Augustsson named Djinn, which takes a type and proposes an object of that type, using an intuitionistic-logic theorem prover (the current Djinn library dates back to 2014, but Lennart made it already in 2006, or before, if I am not mistaken; he credits Roy Dyckhoff and Don Stewart). Here is a test:

Djinn> f ? (a,b) -> (b,a)   -- my input
f :: (a, b) -> (b, a)
f (a, b) = (b, a)

Doaitse Swierstra comments: "swap (a, b) = (b, a). Once you ask for the type you get (a, b) -> (b, a), hence the type completely specifies what swap computes, and *the function is even more general than the version of the type above*." I don't see that last point... Anyway, the typing power of Haskell should be known. Thx.

Jerzy Karczmarczuk

Some code examples that I usually show/explain to someone who is interested in Haskell but has no previous exposure:

ADTs, even for something as simple as

```Haskell
data ShoppingCartCommand
  = CreateCart UserId
  | AddToCart CartId ProductId Quantity
  | ClearCart CartId
  deriving (Eq, Show)
```

This would be many, many lines of non-trivial C# or Java code (you would need to think about abstract classes or interfaces, correctly override toString, equality and getHashCode, write tests for all this, etc).

If the audience is familiar with C#, then explaining the ability to abstract over type constructors may work well. In C# there is no way to generalise over, say, IEnumerable<T> and IObservable<T>. If you want to accept both then you'd have to write the same LINQ statements twice (or convert one into another, which is not always possible).

Regards, Alexey.
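A runnable sketch of why ADTs like this pay off in practice: pattern matches over them are checked for exhaustiveness, so adding a constructor makes the compiler point at every place that must change. (The type synonyms below are stand-ins; the original message leaves UserId, CartId, etc. abstract.)

```haskell
-- Stand-in types for the identifiers the original message assumes.
type UserId    = Int
type CartId    = Int
type ProductId = Int
type Quantity  = Int

data ShoppingCartCommand
  = CreateCart UserId
  | AddToCart CartId ProductId Quantity
  | ClearCart CartId
  deriving (Eq, Show)

-- With -Wall, GHC warns here if any constructor is left unhandled.
describe :: ShoppingCartCommand -> String
describe (CreateCart u)    = "create cart for user " ++ show u
describe (AddToCart c p q) = "add " ++ show q ++ " of product " ++ show p
                             ++ " to cart " ++ show c
describe (ClearCart c)     = "clear cart " ++ show c

main :: IO ()
main = putStrLn (describe (AddToCart 1 42 3))
```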

I wrote something about just this a few years ago, one such moment for me: http://therning.org/magnus/posts/2007-10-22-324-aha-one-liners.html /M

Oh, and this may not be appropriate for your particular audience, but I definitely like the fact that you can write some algorithms extremely elegantly with recursion schemes, viz.

```Haskell
import Data.Functor.Foldable
import Data.Ratio (Ratio, denominator, (%))

isInteger :: (RealFrac a) => a -> Bool
isInteger = idem (realToFrac . floor)
  where idem = ((==) <*>)

continuedFraction :: (RealFrac a, Integral b) => a -> [b]
continuedFraction = apo coalgebra
  where
    coalgebra x
      | isInteger x = go $ Left []
      | otherwise   = go $ Right alpha
      where
        alpha = 1 / (x - realToFrac (floor x))
        go = Cons (floor x)
```

I wrote up a whole bunch of examples here: http://blog.vmchale.com/article/recursion-examples I think such examples are really great because they are things which are not possible in Rust (or F#, if I am not mistaken).

Definitely after reading the book "Category Theory for Programmers" by Bartosz Milewski. The next step is to get a link from algorithms to category theory, to get: algorithms -> category theory -> Haskell. -- Fabien

Hi Simon, I feel a lot of "aha" in Haskell. However, the most basic characteristics, I think, are the following:

* Algebraic data types (*Extremely wonderful*)
* Pattern matching
* Higher-order functions
* Partial application
* Polymorphic types
* Recursion and lazy patterns

I show these features in the code below.

Simple code example 1:

```Haskell
-- Algebraic data type
data DNA = A | C | G | T deriving Show

-- Using the algebraic data type with a list
dna1 :: [DNA]
dna1 = [A,A,T,C,C,G,C,T,A,G]

-- Pattern matching
abbrev :: DNA -> String
abbrev A = "adenine"
abbrev C = "cytosine"
abbrev G = "guanine"
abbrev T = "thymine"

-- Higher-order function and partial application
abbrevDNAs = map abbrev

-- Interactive example
{-
ghci> abbrevDNAs [A,A,C,G,A,T]
["adenine","adenine","cytosine","guanine","adenine","thymine"]
-}
```

Simple code example 2:

```Haskell
-- Algebraic data types
data RNA = A | C | G | U deriving Show

data Amino = Ala | Arg | Asn | Asp | Cys | Gln | Glu | Gly | His
           | Ile | Leu | Lys | Met | Phe | Pro | Ser | Thr | Trp
           | Tyr | Val | Error deriving Show

-- Pattern matching
rna2amino :: [RNA] -> Amino
rna2amino [A,A,A] = Lys
rna2amino [A,A,G] = Lys
rna2amino [G,A,A] = Glu
rna2amino [A,U,G] = Met
rna2amino [C,A,U] = His
rna2amino _       = Error

-- Higher-order function
convert :: [RNA] -> [Amino]
convert xss = map rna2amino $ splitN 3 xss

-- Polymorphic type, recursion and lazy pattern
splitN :: Int -> [a] -> [[a]]
splitN _ [] = []
splitN n xs = let (as,bs) = splitAt n xs in as : splitN n bs

-- Interactive example
{-
ghci> convert [A,A,A,G,A,A,A,U,G,C,A,U]
[Lys,Glu,Met,His]
-}
```

Of course, I am not familiar with genomes :)

Regards, Takenobu

After a chat with a genomics Ph.D. student, a couple of small ideas (and I'd also like to +1 Takenobu's suggestions):

- Purity and laziness allow the separation of producer and consumer for an arbitrary data model. For example, if you've got a function `legalNextMoves :: ChessBoard -> [ChessBoard]`, you can easily construct an efficient tree of all possible chess games, `allGames :: Tree ChessBoard`, and write different consumer functions that traverse the tree separately -- and walking the infinite tree is as simple as pattern matching.

- Large-scale refactoring with types: this is a huge selling point for Haskell in general, including at my job. The ability to have a codebase which defines

```Haskell
data Shape = Circle Double | Square Double

area :: Shape -> Double
area = \case
  Circle r -> pi * r ^ 2
  Square w -> w ^ 2
```

...and simply change the type to...

```Haskell
data Shape = Circle Double | Square Double | Rectangle Double Double
```

...and have GHC tell us with certainty every place where we've forgotten about the `Rectangle` case is a fantastic, fantastic benefit of Haskell in large codebases.

Tom
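The game-tree idea above can be made concrete in a few lines. This is a toy sketch, not real chess: `nextMoves` on `Int` stands in for `legalNextMoves` on a `ChessBoard`, and `Data.Tree` comes from the containers package that ships with GHC.

```Haskell
import Data.Tree (Tree (..), levels)

-- A toy stand-in for legalNextMoves :: ChessBoard -> [ChessBoard]:
-- from "position" n, the legal moves are n + 1 and n * 2.
nextMoves :: Int -> [Int]
nextMoves n = [n + 1, n * 2]

-- The infinite tree of all "games". Laziness means nodes are only
-- built when some consumer demands them.
allGames :: Int -> Tree Int
allGames n = Node n (map allGames (nextMoves n))

-- A consumer written independently of the producer: breadth-first
-- levels, of which we inspect only a finite prefix.
main :: IO ()
main = print (take 3 (levels (allGames 1)))  -- prints [[1],[2,2],[3,4,3,4]]
```

The producer (`allGames`) describes the whole infinite search space; each consumer (`levels`, a depth-limited search, an evaluator) decides independently how much of it to force.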
(In reply to Simon Peyton Jones via Haskell-Cafe, 11 Jul 2018, 08:10.)
participants (44)
- Alan & Kim Zimmerman
- Alex Silva
- Alexey Raga
- Alexis King
- amindfv@gmail.com
- Brandon Allbery
- Brett Gilio
- Bryan Richter
- Chris Smith
- Conal Elliott
- Damian Nadales
- David Feuer
- Doaitse Swierstra
- Donn Cave
- erik
- Fabien R
- Francesco Ariis
- Graham Klyne
- Henrik Nilsson
- Ionuț G. Stan
- J. Garrett Morris
- Jake
- Jay Sulzberger
- Jeffrey Brown
- Jerzy Karczmarczuk
- Joachim Durchholz
- John Wiegley
- Krystal Maughan
- Magnus Therning
- Marcin Szamotulski
- Neil Mayhew
- Olga Ershova
- Paul
- PY
- Scott Fleischman
- Sergiu Ivanov
- Simon Peyton Jones
- Stefan Monnier
- Stuart A. Kurtz
- Takenobu Tani
- Theodore Lief Gannon
- Tom Ellis
- Tony Morris
- Vanessa McHale