Doubts about the functional programming paradigm

I am a beginner in Haskell. I have heard a lot about Haskell being great for parallel programming and concurrency, but I couldn't understand why. Aren't iterative algorithms like MapReduce more suitable to run in parallel? Also, how do immutable data structures add to speed? I'm having trouble understanding the very philosophy of functional programming: what do we gain by writing everything as functions and pure code (without side effects)? Any links or references would be a great help.

Thanks,
Abhishek Kumar

I'd recommend writing out some code and then deciding. Functional
programming is not a panacea; it's just that the challenges are in
different places. Proponents claim that the challenges are in the
*right* place. Your mileage might vary.
I recommend working through 'Real World Haskell' as a good place to start.
--Sanatan
_______________________________________________ Beginners mailing list Beginners@haskell.org http://mail.haskell.org/cgi-bin/mailman/listinfo/beginners

If you're looking for a place to start, I'd recommend the learnhaskell guide:
https://github.com/bitemyapp/learnhaskell
-- Sumit Sahrawat, Junior - Mathematics and Computing, Indian Institute of Technology - BHU, Varanasi, India

parallel programming and concurrency but couldn't understand why?
Let's experiment: spread a programming task across large teams. In project A, each programmer works on their own area but may also change others' code; programmers also depend on the progress their colleagues make. In project B, each programmer works on their own code, with inputs and outputs well defined; there is no dependency on other programmers. Even if you haven't worked in both scenarios (project A & B), it is probably easy to imagine how each project would progress.
how immutable data structures add to speed?
Immutable data structures add to reliability.
what do we gain by writing everything as functions and pure code(without side effects)?
Pure code gives consistency and allows you to split larger programs into parts without fear for the end result.
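To make that concrete, here is a minimal sketch (my own illustration, not from the thread; the function names are made up): a pure function depends only on its arguments, so larger programs can be split into parts that cannot interfere with each other.

```haskell
-- A pure function: the result depends only on the inputs, so any
-- call can be replaced by its value without changing the program.
discount :: Double -> Double -> Double
discount rate price = price * (1 - rate)

-- Because 'discount' is pure, composing it with other pure functions
-- cannot change their behaviour behind our backs.
totalAfterDiscount :: Double -> [Double] -> Double
totalAfterDiscount rate prices = sum (map (discount rate) prices)

main :: IO ()
main = print (totalAfterDiscount 0.5 [100, 200])  -- 150.0
```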

Functional languages make it easy to decompose problems in the way that MapReduce frameworks require. A few examples (fold is another name for reduce):

    sum :: [Double] -> Double
    sum xs = foldr (+) 0 xs

    sumSquares :: [Double] -> Double
    sumSquares xs = foldr (+) 0 (map (**2) xs)

    -- foldMap combines the map & fold steps
    -- The Monoid instance for Text specifies how to combine two Texts
    -- Unlike numbers, there's only one consistent option
    unlines :: [Text] -> Text
    unlines xs = foldMap (`snoc` '\n') xs

We'd need a few changes[1] to make this parallel and distribute across many computers, but expressing the part that changes for each new MapReduce task should stay easy.

Immutable data by default helps with concurrency. Speed may or may not be the goal. We want to be able to distribute tasks (eg, function calls) across processor cores, and run them in different order, without introducing race conditions.

Simon Marlow's book is great at explaining parallel & concurrent concepts, and the particular tools for applying them in Haskell: http://chimera.labs.oreilly.com/books/1230000000929

bergey

Footnotes: [1] OK, many changes.
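As a rough sketch of the kind of change involved (my own illustration using `par`/`pseq` from GHC's base library; real code would more likely use Strategies from the `parallel` package, or a distributed framework): the fold stays the same shape, and only an evaluation plan is added.

```haskell
import GHC.Conc (par, pseq)

-- Sketch: square-and-sum each half of the list, sparking the left
-- half to evaluate in parallel with the right half. The result is
-- identical to the sequential version; only the schedule changes.
sumSquaresPar :: [Double] -> Double
sumSquaresPar xs = left `par` (right `pseq` (left + right))
  where
    (as, bs) = splitAt (length xs `div` 2) xs
    left  = sum (map (** 2) as)
    right = sum (map (** 2) bs)

main :: IO ()
main = print (sumSquaresPar [1 .. 10])  -- 385.0
```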

A pure functional language enables you to reason about your code,
something you can't easily achieve with your average C or Java. And by
`reason' I am referring to mathematical proof. Haskell makes it very
simple, actually. Why should you want to reason about your code?
Think of the hassle you could avoid if you knew what your code really
meant and did when executed.
The absence of side effects is part of another concept in FP, namely,
`referential transparency'. If your function `f' maps a value `x' to
a value `y' then `f x' will always equal `y' and no more. In other
words, your function `f' won't change anything, e.g. assign to
variables or make other state changes, beyond mapping `x' to `y', and
that's an absolute certainty, in theory at any rate.
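As a small made-up illustration of that point: because `f x` always denotes the same value, the two expressions below are guaranteed equal, and the compiler (or a programmer) may substitute one for the other freely.

```haskell
f :: Int -> Int
f x = x * x + 1

-- Referential transparency: 'f 3' always denotes the same value, so
-- calling twice and naming the result once are interchangeable.
viaTwoCalls, viaSharedValue :: Int
viaTwoCalls    = f 3 + f 3
viaSharedValue = let y = f 3 in y + y

main :: IO ()
main = print (viaTwoCalls == viaSharedValue)  -- True
```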
That's a very crude overview of at least part of what functional
programming is about. I'm hoping it'll encourage others on this list
with far more in-depth knowledge of the subject matter to come in and
fill in the gaps and iron out the ambiguities.
Matthew

Building on that, I think coming to Haskell with a very specific goal in mind (like swap Haskell for Java in my map reduce problem) kind of misses the point. Haskell may or may not be faster/better suited to map reduce vs Java, but the real reason to use/learn Haskell is elegance and correctness. The lack of side effects and referential transparency means you're far more likely to prevent bugs. And there's a pretty substantial learning curve coming from imperative languages so if you need to speed up map reduce on a deadline you will be more productive in the imperative language of your choice (for now).
Don't take this as discouragement; I think Haskell (and FP in general) is very well suited to that kind of problem. I'm a beginner in Haskell and it's already had a huge impact on how I think about all the code I write, not just the occasional toy Haskell project.

Hey guys,

This conversation is really interesting. I am not a Haskell expert (I haven't progressed into monads in Haskell yet, and have only briefly studied the state monad in Scala). Studying FP concepts has changed the way I think about complicated problems. I am not quite there yet (I still catch myself updating state when I don't need to, and wondering how the hell to break a problem into recursive little bits), but I now notice that I am less tempted to do things that would have bugs in complicated data structures. I prefer map to do something to a list, and folds to for loops that update a ``state'' variable, and, finally, I can reason about where the weak points in my code may be. The learning I have achieved due to FP, and my principles of programming languages class that relied on FP, have made doing things with code easier, because I don't try to do stupid things like overrun the end of an array with a misplaced loop counter variable. It is easy to learn fold, filter, map, and list comprehensions, and you then have a powerful weapon to use against those ugly off-by-one errors.

Also, earlier this year I learned how people who learned programming in non-traditional languages might approach problems. My stats professor was showing us things in R. She showed us a complicated set of formulas we needed to apply, and then explained that we were calculating something for each element of a list. She showed a function sapply(vector, function(elem)) that returns a vector. She said to think about how sapply applies that function to each vector element and returns the transformed list. She didn't approach it as if it were this big bad function that takes a function, mainly, I think, because she hadn't learned programming from people who insist on the idea of C. It also really made sense to the class, who mainly had little to no programming experience, where explaining a for loop in all its glory would normally take a couple of lectures.

She is a really solid programmer and really understands how to use programming to solve real-world problems, so I am not saying that she didn't know enough to have learned for loops, just that she immediately realized that the sapply function really was better for the situation. If we teach people these patterns from the get-go, I think some of the horror of learning functional programming would be solved, because the number of situations where a generic function applies far outnumbers the cases where a for loop or state monad is needed.

I would also now argue that all data structures classes should be taught in functional programming languages. I never solidly understood trees, and couldn't confidently traverse and change them, until I actually got introduced to pattern matching and folds. I was taught binary search trees and red-black trees, and worked with tree-like structures, but it was always hard for me to comprehend how to do things with them. I learned linked lists in C++ but hated them with a passion, because I had to write for loops that updated a temporary variable, and while loops that did the same but often jumped off the end of the list, after which I couldn't go back. The beauty of recursion and lists is that recursion allows you to backtrack when things go wrong (as they always will for any real input in a program). The second I learned about Haskell for the first time, linked list traversal became second nature to me, even in non-functional (inferior) C++. The argument that "recursion overflows the stack" is kind of a flawed one, since the compiler wizards have figured out ways to optimize tail-recursive functions to do exactly what we humans are bad at: running recursive functions as if they were unrolled, with large inputs, and fast.

Thanks,
Derek

On 12/11/2015 11:32 AM, Thomas Jakway wrote:
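A small sketch of that tail-call point (my own example, not from the thread): the recursive call below is the last thing `go` does, so GHC can compile it to a loop in constant stack space; the bang pattern keeps the accumulator from building up unevaluated thunks.

```haskell
{-# LANGUAGE BangPatterns #-}

-- Tail-recursive sum of 1..n: 'go' returns the result of its own
-- recursive call directly, so no stack frames accumulate, and the
-- strict accumulator avoids a chain of deferred additions.
sumTo :: Int -> Int
sumTo n = go 0 1
  where
    go !acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)

main :: IO ()
main = print (sumTo 1000000)  -- 500000500000
```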
-- ------------------------------------------------------------------------ Derek Riemer * Department of computer science, third-year undergraduate student. * Proud user of the NVDA screen reader. * Open source enthusiast. * Member of Bridge Cu * Avid skier. Websites: Honors portfolio http://derekriemer.drupalgardens.com Non-professional website: http://derekriemer.pythonanywhere.com/personal Awesome little hand-built weather app that rocks! http://derekriemer.pythonanywhere.com/weather email me at derek.riemer@colorado.edu Phone: (303) 906-2194

Purity makes type signatures astonishingly informative. Often -- in fact
usually! -- one can determine what a function does simply from its name and
type signature.
The opportunities for bad coding are fewer in Haskell. Before Haskell, I
was a proud Python programmer -- proud, thinking I was good at selecting
the right way from a forest of wrong ways. In Haskell I find less
opportunity for self-congratulation, because most of the wrong ways are no
longer available.
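A standard illustration of how much a polymorphic type pins down (my own example, not from the thread): a total function of type `a -> a` can only be the identity, and a function of type `[a] -> [a]` can rearrange, drop, or duplicate elements, but never invent new ones.

```haskell
-- The only total function of this type is the identity: the function
-- knows nothing about 'a', so it can only hand its argument back.
mystery :: a -> a
mystery x = x

-- This type permits reordering, dropping, or duplicating elements,
-- but never manufacturing a brand-new value of type 'a'.
rearrange :: [a] -> [a]
rearrange = reverse

main :: IO ()
main = print (mystery (42 :: Int), rearrange "abc")  -- (42,"cba")
```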
-- Jeffrey Benjamin Brown

Regarding concurrency+immutability with respect to both reliability and
performance:
One way to think about synchronizing concurrent programs is by sharing
memory. If the content of that memory changes, then there is a risk of race
conditions arising in the affected programs. (A common source of vexing
bugs, and complications for compilers.) But if the contents are somehow
guaranteed not to change (i.e. a specific definition of immutability), then
no race conditions are possible for the lifetime of access to that memory.
Although this is a greatly simplified illustrative explanation, it is
generally at the heart of arguments for immutability aiding performance.
Unchanging regions of memory tend to permit simpler models, since
limitations on synchronization are lifted. This in turn allows more
freedom to pursue many otherwise tricky optimizations, such as deciding
when to duplicate based on cache geometry, trivially remembering old
results, etc.
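A minimal sketch of that guarantee in Haskell (my own example; `readersOnSharedData` is a made-up name): two threads read the same list with no lock around it, and because the list can never change, no interleaving of the reads can produce a race.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Two threads consume the same immutable list concurrently. The
-- MVars only signal completion; 'shared' itself needs no lock,
-- because its contents are guaranteed never to change.
readersOnSharedData :: [Int] -> IO (Int, Int)
readersOnSharedData shared = do
  done1 <- newEmptyMVar
  done2 <- newEmptyMVar
  _ <- forkIO (putMVar done1 (sum shared))
  _ <- forkIO (putMVar done2 (length shared))
  (,) <$> takeMVar done1 <*> takeMVar done2

main :: IO ()
main = readersOnSharedData [1 .. 1000] >>= print  -- (500500,1000)
```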
Regarding the discourse on purely functional programs not having side
effects:
Writing pure programs without side effects is a little tricky to talk
about, since this has some very precise technical meanings depending on
whom you talk to. (What constitutes an effect? Where is the line between
intentional and unintentional drawn?)
Maybe think of this statement as part of the continuum of arguments about
languages that allow us to write simpler programs that more precisely state
the intended effects.
Cheers,
Darren

Having the option of communicating by sharing memory, message-passing style
(copying), or copy-on-write semantics in Haskell programs is why I've found
Haskell to be a real pleasure for performance (latency, mostly) sensitive
services I frequently work on. I get a superset of options in Haskell for
which I don't have any one choice that can really match it in concurrency
problems or single-machine parallelism. There's some work to be done to
catch up to OTP, but the community is inching its way in a few directions
(Cloud Haskell/distributed Haskell, courier, streaming libs + networking,
etc.)
Generally I prefer to build out services in a conventional style (breaking
out capacities like message backends or persistence into separate
machines), but the workers/app servers are all in Haskell. That is, I don't
try to replicate the style of cluster you'd see with Erlang services in
Haskell, but I know people that have done so and were happy with the
result. Being able to have composable concurrency via STM without
compromising correctness is _no small thing_ and the cheap threads along
with other features of Haskell have served to make it so that concurrency
and parallelization of Haskell programs can be a much more modular process
than I've experienced in many other programming languages. It makes it so
that I can write programs totally oblivious to concurrency or parallelism
and then layer different strategies of parallelization or synchronization
after the fact, changing them out at runtime if I so desire! This is only
possible for me in Haskell because of the non-strict semantics and
incredible kit we have at our disposal thanks to the efforts of Simon
Marlow and others. Much of this is ably covered in Marlow's book at:
http://chimera.labs.oreilly.com/books/1230000000929
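The "layer parallelism after the fact" point can be sketched with `par` and `pseq` from base (a minimal illustration, assuming GHC; the function name is invented for the example):

```haskell
import GHC.Conc (par, pseq)

-- A pure computation written with no knowledge of parallelism can
-- have an evaluation strategy layered on afterwards: spark the left
-- half while evaluating the right, without changing the result.
parSumSquares :: [Int] -> Int
parSumSquares xs = left `par` (right `pseq` (left + right))
  where
    (as, bs) = splitAt (length xs `div` 2) xs
    left  = sum (map (^ 2) as)
    right = sum (map (^ 2) bs)

main :: IO ()
main = print (parSumSquares [1 .. 10000])  -- prints 333383335000
```

Because the semantics are non-strict and the function is pure, `par` is only an annotation about evaluation order; deleting it leaves the program's meaning untouched.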
Side bar: although using "pure" with respect to effects is the common usage
now, I'd urge you to consider finding a different wording since the
original (and IMHO more meaningful) denotation of pure functional
programming was about semantics and not the presence or absence of effects.
The meaning was that you had a programming language whose semantics were
lambda-calculus-and-nothing-more. This can be contrasted with ML where the
lambda calculus is augmented with an imperative language that isn't
functional or a lambda calculus. Part of the problem with making purity
about effects rather than semantics is that the terrible imprecision
confuses new people. They'll often misunderstand it as, "Haskell programs
can't perform effects" or they'll think it means stuff in "IO" isn't pure -
which is false. We benefit from having a pure functional programming language
_especially_ in programs that emit effects. Gabriel Gonzalez has a nice
article demonstrating some of this:
http://www.haskellforall.com/2015/03/algebraic-side-effects.html
When I want to talk about effects, I say "effect". When I want to say
something that doesn't emit effects, I say "effect-free" and when it does,
"effectful". Sometimes I'll say "in IO" for the latter as well, where "in
IO" can be any type that has IO in the outermost position of the final
return type.
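That vocabulary in a minimal sketch (the names here are illustrative, not from the original message):

```haskell
-- Effect-free: the result depends only on the argument.
double :: Int -> Int
double x = 2 * x

-- Effectful ("in IO"): IO sits in the outermost position of the
-- final return type, so the type declares the chicanery up front.
greet :: String -> IO ()
greet name = putStrLn ("hello, " ++ name)

main :: IO ()
main = do
  print (double 21)  -- prints 42
  greet "Abhishek"   -- prints hello, Abhishek
```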
But, in the end, I'm not really here to convince anybody to use Haskell.
I'm working on http://haskellbook.com/ with my coauthor Julie because I
thought it was unreasonably difficult and time-consuming to learn a
language that is quite pleasant and productive to use in my day to day
work. If Haskell picks up in popularity, cool - more libraries! If not,
then it remains an uncommon and not well-understood competitive advantage
in my work. I'm not sure I mind either outcome as long as the community
doesn't contract and it seems to be doing the opposite of that lately.
I use Haskell because I'm lazy and impatient. I do not tolerate tedious,
preventable work well. Haskell lets me break down my problems into
digestible units, it forces the APIs I consume to declare what chicanery
they're up to, it gives me the nicest kit for my work I've ever had at my
disposal. It's not perfect - it's best if you're comfortable with a unix-y
toolkit, but there's Haskellers on Windows keeping the lights on too.
Best of luck to Abhishek whatever they decide to do from here. I won't
pretend Haskell is "easy" - you have to learn more before you can write the
typical software project, but it's an upfront cliff sorta thing that
converts into a long-term advantage if you're willing to do the work. This
is more the case than what I found with Clojure, Erlang, Java, C++, Go,
etc. They all have a gentler upfront productivity cliff, but don't pay off
nearly as well long-term in my experience. YMMV.
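The composable concurrency via STM mentioned above can be sketched like this (a minimal illustration using the stm package that ships with GHC; the account setup is invented for the example):

```haskell
import Control.Concurrent.STM

-- Two balances; transfer composes two updates into one atomic
-- transaction. Composed transactions stay atomic: there is no lock
-- ordering to get wrong, which is what makes STM modular.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to   (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  balances <- atomically ((,) <$> readTVar a <*> readTVar b)
  print balances  -- prints (70,30)
```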
On Fri, Dec 11, 2015 at 3:13 PM, Darren Grant wrote:
-- Chris Allen Currently working on http://haskellbook.com

What is a `race condition'?
On 11/12/2015, Christopher Allen wrote:

Two or more code units want to do basically the same thing at almost the
same moment. Whichever one does the thing first (or sometimes, second)
wins, and the other loses.
E.g., imagine two processes communicating over shared memory. If they both
want to write to variable x in shared memory at almost the same moment,
whichever one writes second "wins", because the write of the first is wiped
out by the write of the second.
Some race conditions aren't much of a problem, but some of them can be a
source of really hard-to-track-down bugs.
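A minimal sketch of that lost-update race and its repair (illustrative, using only GHC's base library): two threads bump a shared counter, and atomicModifyIORef' makes each read-modify-write a single indivisible step, so no increment is lost.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Monad (forM_, replicateM, replicateM_)
import Data.IORef (atomicModifyIORef', newIORef, readIORef)

-- With a separate readIORef/writeIORef pair, two threads could both
-- read n and both write n+1, losing one increment (the race above).
-- atomicModifyIORef' closes that window.
main :: IO ()
main = do
  counter <- newIORef (0 :: Int)
  dones   <- replicateM 2 newEmptyMVar
  forM_ dones $ \done -> forkIO $ do
    replicateM_ 100000 $ atomicModifyIORef' counter (\n -> (n + 1, ()))
    putMVar done ()            -- signal this worker is finished
  mapM_ takeMVar dones         -- wait for both workers
  readIORef counter >>= print  -- prints 200000, never less
```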
On Fri, Dec 11, 2015 at 4:45 PM, MJ Williams
What is a `race condition'?
-- Dan Stromberg

+1 for composable concurrency through STM. It multiplies programmer effectiveness many times over when doing concurrent programming. I now consider threads, condition variables and locks "cruel and unusual punishment".

However, the learning curve in Haskell is not a small problem. It is *the* crucial problem with the language. I believe it is exactly what will prevent the language from becoming mainstream in the foreseeable future (e.g. 10 years). Consider a snippet of an error message from GHC that previously appeared in this list:

    Couldn't match expected type ‘r’ with actual type ‘COParseResult’
    ‘r’ is a rigid type variable bound by the type signature for
      getParseResult :: ParseResult r => String -> IO r
    ...

I have a PhD in computer science, but never really liked programming languages back then, and somehow I never learned what a "rigid type variable" is. I was happily chugging along until I came to Haskell and was confronted with this error message. Does one have to be a type theorist to make sense of it? Strictly speaking, I am complaining about GHC, not Haskell. But would anyone care to explain to a novice, in a couple of paragraphs, why foldl (+) 0 [1..10^9] may take 10 gigabytes of RAM to calculate?

My advice to anyone who wants to learn Haskell is to join social groups, as we are not yet ready to teach it through books. I frequently have to go read academic papers to learn new topics. That shows me that the knowledge used to build the language simply has not been digested enough to be easily accessible to the average programmer.

Summarizing, learning Haskell: it will be a bumpy ride, but there *is* a happy ending.

Cheers,
Dimitri

On 12/11/15 7:36 PM, Christopher Allen wrote:

On Sat, 12 Dec 2015 01:56:48 +0100, Dimitri DeFigueiredo
Couldn't match expected type ‘r’ with actual type ‘COParseResult’ ‘r’ is a rigid type variable bound by the type signature for getParseResult :: ParseResult r => String -> IO r ...
I have a PhD in computer science, but never really liked programming languages back then and somehow I never learned what a "rigid type variable" is.
See [Haskell-cafe] What is a rigid type variable? https://mail.haskell.org/pipermail/haskell-cafe/2008-June/044622.html :
But would anyone care to explain to a novice in a couple paragraphs why foldl (+) 0 [1..10^9] may take 10 Gigs of RAM to calculate?
The foldl builds up a very long expression and evaluates it only after the last element of the list is reached (the evaluation is non-strict, or lazy). If you use foldl' (from Data.List) instead, the accumulator is evaluated at each element (the evaluation is strict). For more details, see:
  Foldr Foldl Foldl': https://wiki.haskell.org/Foldr_Foldl_Foldl'
  Lazy vs. non-strict: https://wiki.haskell.org/Lazy_vs._non-strict
Regards,
Henk-Jan van Tuyl
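The difference can be seen directly (a small sketch; the list is shortened from 10^9 so it runs quickly):

```haskell
import Data.List (foldl')

-- foldl builds the whole chain (((0+1)+2)+...) as unevaluated
-- thunks and only then forces it; foldl' forces the accumulator at
-- every step, so it folds a long list in constant space.
main :: IO ()
main = print (foldl' (+) 0 [1 .. 1000000 :: Integer])  -- prints 500000500000
```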

Is it possible to write a wrapping function (if one does not already exist) that would analyze its inputs and apply the appropriate fold (foldl, foldl', foldr, foldr'), or safeguard (return a Left warning) against going down the 10 GB RAM route, where this can be avoided?

On Sun, Dec 13, 2015 at 4:02 PM, Imants Cekusins
I know next to nothing about Haskell, but I suspect this would require knowing whether a list is finite or infinite, which may be equivalent to "the halting problem" - IOW, not possible in general in a finite amount of time. -- Dan Stromberg
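Dan's point can be made concrete: a left fold must reach the end of the list before it can produce anything, so no wrapper can pick a fold safely without knowing the list is finite. A right fold whose combining function is lazy in its second argument, however, can stop early even on an infinite list (a small sketch; `firstEven` is an invented name for illustration):

```haskell
-- foldr with a combining function that is lazy in its second
-- argument can short-circuit, even on an infinite list:
firstEven :: [Int] -> Maybe Int
firstEven = foldr (\x rest -> if even x then Just x else rest) Nothing

main :: IO ()
main = print (firstEven [1 ..])  -- a left fold would never terminate here
```

This prints `Just 2`; any left fold over `[1 ..]` would loop forever.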

Oops! Sorry, I think I wasn't clear. I know the answers to all the questions I asked; they were rhetorical. I just wanted to make the point that learning Haskell is *much* harder than learning most other programming languages, and that the (multitude of) learning aids available are not yet cohesive enough to present a clear path ahead for the average programmer. I also think this is the main reason Haskell will NOT be more widely used any time soon, despite its many other advantages. I think newcomers to the language should know this before they start, so they can evaluate their reasons and seek help from others (as on this list) to guide them in the process. Thank you very much for the pointers in any case; they look very good. Dimitri

I have a pedagogical question. Why do you prefer "effect" to "side
effect"? I know that "pure" is misleading to programmers new to
Haskell, but I had thought that "side effect" was more likely to be
self-explanatory.
I also reach for longer phrases like "free from side effects" if I'm
talking to my students.
cheers,
bergey
On 2015-12-11 at 16:36, Christopher Allen
Side bar: although using "pure" with respect to effects is the common usage now, I'd urge you to consider finding a different wording, since the original (and IMHO more meaningful) denotation of pure functional programming was about semantics and not the presence or absence of effects. The meaning was that you had a programming language whose semantics were lambda-calculus-and-nothing-more. This can be contrasted with ML, where the lambda calculus is augmented with an imperative language that isn't functional or a lambda calculus. Part of the problem with making purity about effects rather than semantics is that the terrible imprecision confuses new people. They'll often misunderstand it as "Haskell programs can't perform effects", or they'll think it means stuff in "IO" isn't pure - which is false. We benefit from having a pure functional programming language _especially_ in programs that emit effects. Gabriel Gonzalez has a nice article demonstrating some of this: http://www.haskellforall.com/2015/03/algebraic-side-effects.html
When I want to talk about effects, I say "effect". When I want to say something that doesn't emit effects, I say "effect-free" and when it does, "effectful". Sometimes I'll say "in IO" for the latter as well, where "in IO" can be any type that has IO in the outermost position of the final return type.

On 12/12/15 14:38, Daniel Bergey wrote:
Why do you prefer "effect" to "side effect"?
FWIW, I say "effect" rather than "side effect" when talking about Haskell, because in Haskell effects happen when you want them, not as an unforeseen side-effect of the complexity inherent to the source code. It is often said that having an effect is "difficult" in Haskell. But really, it's just that if you are launching missiles in Haskell, *you actually mean to*. It didn't happen because you wanted to increment i and then "oops, stuff happened".
- -- Alexander alexander@plaimi.net https://secure.plaimi.net/~alexander

Good answer! Having worked on a lot of embedded microprocessor systems
over the years, that's exactly the kind of thing you don't want, and it's
sometimes all too easy to do by mistake when writing C or assembler!
:)

A good and witty answer :)

Am 12/15/2015 um 09:55 AM schrieb Alexander Berntsen:
FWIW, I say "effect" rather than "side effect" when talking about Haskell, because in Haskell effects happen when you want them, not as an unforeseen side-effect as a result of the complexity inherent to the source code.
It is often said that having an effect is "difficult" in Haskell. But really, it's just that if you are launching missiles in Haskell, *you actually mean to*. It didn't happen because you wanted to increment i and then "oops, stuff happened".
What is the exact definition of "effect"? Everybody talks about it, but I am certainly unable to give a definition.

What is the exact definition of "effect"?
let's try:
effect: a change which is a consequence of an action (in this case, a function call)
side effect: a change of environment state which is a consequence of an action (a function call)
pure function: calling this function does not affect the environment's state; the function returns a value, that's all
I am not sure whether a function running inside e.g. the State monad and modifying that monad's state is pure, i.e. whether the monad's state counts as environment

There is no exact definition of "effect" so this discussion must
necessarily be vague and probably not very enlightening. The State Monad
may or may not have effects, depending on your definition, but it is
definitely pure.
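Rein's claim can be checked directly. Below is a minimal hand-rolled State (a sketch, not the actual definition from the transformers package): the "stateful" computation desugars to nothing but function application, so it is pure by construction.

```haskell
-- A minimal State monad, hand-rolled to show there is no magic:
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  State g >>= k = State $ \s -> let (a, s') = g s in runState (k a) s'

get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

-- A "stateful" counter: under the hood it is just function composition.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

main :: IO ()
main = print (runState (tick >> tick >> tick) 0)
```

`runState (tick >> tick >> tick) 0` evaluates to `(2,3)` by plain substitution; no external state is touched anywhere.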

Incidentally, IO is pure too
-- Chris Allen Currently working on http://haskellbook.com

For some practical purposes, would it make sense to forget about effects and purity and distinguish only between IO and non-IO? It should be easy enough to tell IO from non-IO, no? Also, could we say that a function that returns a value of a type no part of which is a function (for lack of a better definition) is definitely a pure function?

TL;DR: Evaluation is always pure[1]. Execution isn't, but it wasn't supposed to be in the first place.
would it make sense to forget about effects and purity and distinguish only between IO and non-IO? It should be easy enough to tell IO from non-IO, no?
There is no need to distinguish between IO and non-IO. Everything is pure, including IO.
When people talk about effects, they tend to mean something encapsulated and "hidden" by the implementation of bind and return for a particular Monad instance; the meaning of "effect" is entirely dependent on this context. For example, State encapsulates the effect of passing an accumulating parameter to multiple functions and properly threading the results. Reader encapsulates additional function parameters that are never varied. Writer encapsulates writing to a "log" (any monoidal accumulator). The list monad encapsulates the effect of non-determinism (functions that can return more than one result), allowing you to work with placeholder variables that represent all possible results at that stage of the computation. ST encapsulates mutation in a way that is *externally unobservable*. And so on.
None of these "effects" are impure. They are all referentially transparent. They do not mutate any external or global state. They can all be rewritten by inlining the relevant definitions of bind and return in a way that will make it obvious that no "funny stuff" is happening.
One important difference between this sort of encapsulation and the kind that you might find in an OOP language is that this encapsulation is *perfect*. These abstractions *do not leak*. There is no way for your program to externally observe any mutation during evaluation of a State action, and the same holds, mutatis mutandis, for all the other monad instances.
IO is a common source of confusion, but the important distinction here is the one that Chris already made: *evaluation* of IO actions is pure, referentially transparent, and causes no effects (side- or otherwise).
Execution of the `main` IO action by the runtime—and by extension the execution of those other IO actions that may compose it—is obviously not pure, but no one is expecting *execution* to be pure: if execution were required to be pure then the program couldn't be run at all, because any attempt to run it would cause some sort of externally observable effect (even if it merely heats up the surrounding space a bit).
A commonly used metaphor that may help is to consider an IO action to be like a recipe one might find in a cookbook. If `getLine` is the recipe for a particular type of cake then it will be the same recipe every time it is invoked. The actual cake that you produce when you execute the recipe may differ—more or less, depending on how proficient you are at baking—but this does not mean that the recipe itself has changed each time. And so it is with IO: the actions are pure. They are the same every time. The results that they produce when executed may change, but this is not at odds with our claim that the values themselves are pure.
also, could we say that a function that returns a value of such a type of which no part is a function (for lack of a better definition) is definitely a pure function
Yes, and all the other functions are pure too.
[1] Modulo usages of `unsafePerformIO` and friends, but these can and
should be dealt with separately.
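The recipe metaphor can be made concrete: an IO action is an ordinary value, and evaluating expressions that manipulate such values performs nothing. Effects happen only when the runtime executes the action handed to it via `main`. A small sketch (`greet` and `recipes` are invented names):

```haskell
-- `greet` is a value of type IO (); naming it, copying it, or
-- putting it in a list executes nothing.
greet :: IO ()
greet = putStrLn "hello"

main :: IO ()
main = do
  let recipes = replicate 3 greet  -- pure evaluation: a list of three actions
  sequence_ recipes                -- execution happens only here
```

Running this prints "hello" three times, even though `greet` was evaluated (as a value) only once.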

They can all be rewritten by inlining the relevant definitions of bind and return in a way that will make it obvious that no "funny stuff" is happening.
This is actually a bit suspect for ST, since it involves uses of the unsafe
functions mentioned in my footnote. The argument is that these functions
are being used in a provably safe way, but the proof cannot be executed in
Haskell: it must be executed elsewhere and ported into Haskell, but it is
still valid (unless the implementation is incorrect). This is an exemplary
usage of unsafe functions to provide *safe* features, where the implementor
has discharged the proof-of-safety obligation elsewhere and doesn't require
Haskell's type system to prove it for them, and where they otherwise
wouldn't be able to provide this functionality or this performance
optimization.

Rein explains effects well enough. Now for side-effects: a side-effect is what is allowed to happen in the absence of a weak equivalence between call-by-name (including memoised versions) and call-by-value[0]. Let's play "spot the side-effect!"

- -- Haskell function for adding two numbers.
f x y = x + y

- -- function in a language that happens to be syntactically like Haskell.
f x y = print "Hallo!" >> x + y

- -- program to evaluate the function.
- -- it just happens to be equal in Haskell and the Haskell-like language.
main = let a = f 4 2 in print a >> print a

evaluation-strategy    | call-by-need | call-by-value
-----------------------+--------------+----------------------
x + y                  | 6\n6         | 6\n6
print "Hallo" >> x + y | Hallo!\n6\n6 | Hallo!\n6\nHallo!\n6

See how the latter function differs between call-by-need and call-by-value? This is because it has the *side-effect* of printing stuff. This is not allowed in Haskell; in fact, the latter function is impossible in Haskell. As Rein was getting at with abstraction leaks: side-effects have an observable interaction with the outside world.

[0] https://www.cs.indiana.edu/~sabry/papers/purelyFunctional.ps
- -- Alexander alexander@plaimi.net https://secure.plaimi.net/~alexander

Dude... it's the Haskell *Beginners* List.
STM, I guess, is Software Transactional Memory.
But OTP? Wazzat?
On Fri, Dec 11, 2015 at 4:36 PM, Christopher Allen
Having the option of communicating by sharing memory, message-passing style (copying), or copy-on-write semantics in Haskell programs is why I've found Haskell to be a real pleasure for performance (latency, mostly) sensitive services I frequently work on. I get a superset of options in Haskell for which I don't have any one choice that can really match it in concurrency problems or single-machine parallelism. There's some work to be done to catch up to OTP, but the community is inching its way a few directions (Cloud Haskell/distributed haskell, courier, streaming lib + networking, etc.)
Generally I prefer to build out services in a conventional style (breaking out capacities like message backends or persistence into separate machines), but the workers/app servers are all in Haskell. That is, I don't try to replicate the style of cluster you'd see with Erlang services in Haskell, but I know people that have done so and were happy with the result. Being able to have composable concurrency via STM without compromising correctness is _no small thing_ and the cheap threads along with other features of Haskell have served to make it so that concurrency and parallelization of Haskell programs can be a much more modular process than I've experienced in many other programming languages. It makes it so that I can write programs totally oblivious to concurrency or parallelism and then layer different strategies of parallelization or synchronization after the fact, changing them out at runtime if I so desire! This is only possible for me in Haskell because of the non-strict semantics and incredible kit we have at our disposal thanks to the efforts of Simon Marlow and others. Much of this is ably covered in Marlow's book at: http://chimera.labs.oreilly.com/books/1230000000929
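The composability of STM that Chris mentions can be sketched as follows (assuming the stm package that ships with GHC; `transfer` and the account variables are invented for the example):

```haskell
import Control.Concurrent.STM

-- Two TVar updates combined into one transaction. Because STM
-- transactions compose, the pair of updates is atomic: no other
-- thread can ever observe the intermediate state.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  modifyTVar' from (subtract n)
  modifyTVar' to   (+ n)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)  -- runs as a single atomic step
  readTVarIO a >>= print        -- 70
  readTVarIO b >>= print        -- 30
```

Two `transfer`s can themselves be sequenced inside one `atomically` to form a larger transaction, which is the modularity that lock-based code struggles to offer.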
Side bar: although using "pure" with respect to effects is the common usage now, I'd urge you to consider finding a different wording, since the original (and IMHO more meaningful) denotation of pure functional programming was about semantics and not the presence or absence of effects. The meaning was that you had a programming language whose semantics were lambda-calculus-and-nothing-more. This can be contrasted with ML, where the lambda calculus is augmented with an imperative language that isn't functional or a lambda calculus. Part of the problem with making purity about effects rather than semantics is that the terrible imprecision confuses new people. They'll often misunderstand it as "Haskell programs can't perform effects", or they'll think it means stuff in "IO" isn't pure - which is false. We benefit from having a pure functional programming language _especially_ in programs that emit effects. Gabriel Gonzalez has a nice article demonstrating some of this: http://www.haskellforall.com/2015/03/algebraic-side-effects.html
When I want to talk about effects, I say "effect". When I want to say something that doesn't emit effects, I say "effect-free" and when it does, "effectful". Sometimes I'll say "in IO" for the latter as well, where "in IO" can be any type that has IO in the outermost position of the final return type.
But, in the end, I'm not really here to convince anybody to use Haskell. I'm working on http://haskellbook.com/ with my coauthor Julie because I thought it was unreasonably difficult and time-consuming to learn a language that is quite pleasant and productive to use in my day to day work. If Haskell picks up in popularity, cool - more libraries! If not, then it remains an uncommon and not well-understood competitive advantage in my work. I'm not sure I mind either outcome as long as the community doesn't contract and it seems to be doing the opposite of that lately.
I use Haskell because I'm lazy and impatient. I do not tolerate tedious, preventable work well. Haskell lets me break down my problems into digestible units, it forces the APIs I consume to declare what chicanery they're up to, it gives me the nicest kit for my work I've ever had at my disposal. It's not perfect - it's best if you're comfortable with a unix-y toolkit, but there's Haskellers on Windows keeping the lights on too.
Best of luck to Abhishek whatever they decide to do from here. I won't pretend Haskell is "easy" - you have to learn more before you can write the typical software project, but it's an upfront cliff sorta thing that converts into a long-term advantage if you're willing to do the work. This is more the case than what I found with Clojure, Erlang, Java, C++, Go, etc. They all have a gentler upfront productivity cliff, but don't pay off nearly as well long-term in my experience. YMMV.
On Fri, Dec 11, 2015 at 3:13 PM, Darren Grant
wrote:
Regarding concurrency+immutability with respect to both reliability and performance:
One way to think about synchronizing concurrent programs is by sharing memory. If the content of that memory changes, then there is a risk of race conditions arising in the affected programs. (A common source of vexing bugs, and complications for compilers.) But if the contents are somehow guaranteed not to change (ie. a specific definition of immutability), then no race conditions are possible for the lifetime of access to that memory.
Although this is a greatly simplified illustrative explanation, it is generally at the heart of arguments for immutability aiding performance. Unchanging regions of memory tend to permit simpler sorts of models since limitations are lifted on synchronization. This in turn allows both more freedom to pursue many otherwise tricky optimizations, such as ex. deciding when to duplicate based on cache geometry, trivially remembering old results, etc.
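A minimal illustration of Darren's point (the names are invented for the example): because the list below is immutable, the forked thread and the main thread can both read it with no locking and no possibility of a race.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

main :: IO ()
main = do
  let shared = [1 .. 1000] :: [Int]          -- immutable: safe to share freely
  result <- newEmptyMVar
  _ <- forkIO (putMVar result (sum shared))  -- one thread reads it...
  fromWorker <- takeMVar result
  print (fromWorker + sum shared)            -- ...while main reads it too
```

The MVar synchronizes only the hand-off of the result; the shared list itself needs no protection, which is exactly what mutable shared memory cannot promise.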
Regarding the discourse on purely functional programs not having side effects:
Writing pure programs without side effects is a little tricky to talk about, since this has some very precise technical meanings depending on whom you talk to. (What constitutes an effect? Where is the line between intentional and unintentional drawn?)
Maybe think of this statement as part of the continuum of arguments about languages that allow us to write simpler programs that more precisely state the intended effects.
Cheers, Darren
-- Chris Allen Currently working on http://haskellbook.com


Thanks.
On Mon, Dec 14, 2015 at 12:24 PM, Imants Cekusins
Erlang OTP
https://en.m.wikipedia.org/wiki/Open_Telecom_Platform
participants (20)
- Abhishek Kumar
- Alexander Berntsen
- Christopher Allen
- Dan Stromberg
- Daniel Bergey
- Darren Grant
- derek riemer
- Dimitri DeFigueiredo
- emacstheviking
- Henk-Jan van Tuyl
- Imants Cekusins
- Jeffrey Brown
- John Lusk
- martin
- mike h
- MJ Williams
- Rein Henrichs
- Sanatan Rai
- Sumit Sahrawat, Maths & Computing, IIT (BHU)
- Thomas Jakway