
On 05.12.2015 at 02:01, Dimitri DeFigueiredo wrote:
Indeed. I find laziness and the non-composable nature of space complexity in Haskell to be a much harder beast to deal with than immutability.
I can imagine. Immutability can be dealt with by applying a few principles at the key points, and everything will work out; space complexity is riddled with exceptions. OTOH, space and time complexity are non-composable (well, not trivially composable anyway) even in a strict language. As software grows larger, people start applying all kinds of deferred-evaluation strategies, and the same problems crop up - with the added complication that by then the software is so large that the root cause of a given complexity problem is hidden behind lots of tiny details. So I currently believe that Haskell is just showcasing the problem early. That is worth something, but I guess not for the majority of programmers who are doing business logic for websites, nor for those just learning Haskell and tripping over space complexity.
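A minimal sketch of the kind of problem meant here, using the classic foldl vs. foldl' example:

import Data.List (foldl')

-- Lazy foldl defers every addition, building a chain of thunks
-- ((0 + 1) + 2) + 3 ... so memory grows linearly with the list.
-- (With -O, GHC's strictness analyser may rescue this particular
-- case; larger programs are usually not so lucky.)
leakySum :: [Int] -> Int
leakySum = foldl (+) 0

-- foldl' forces the accumulator at every step, so it runs in constant space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 10000000])
  -- swap in leakySum and compile without -O to watch memory grow

The two functions are extensionally equal; only their space behaviour differs, which is exactly why the problem doesn't compose.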
There is an *excellent* introduction to the basics of lazy evaluation in Graham Hutton's book /Programming in Haskell/ (chapter 12), but I don't know of any good references beyond that basic level. Let us know if you find some!
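For a taste of what that chapter covers - under lazy evaluation only the values that are actually demanded get computed, so even infinite structures are fine:

-- An infinite list is harmless as long as we only demand a finite prefix.
nats :: [Integer]
nats = [0 ..]

main :: IO ()
main = print (take 5 (map (* 2) nats))  -- prints [0,2,4,6,8]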
Sumit gave a link to backhands.com, which has turned out to be pretty deep and accessible. Unfortunately, that link still does not offer a systematic approach to keeping tabs on complexity.

So... the main question is still open, and I hope somebody can shed some light on how to attack it. What do people do when they find that their Haskell code performs badly - do they all resort to ad-hockery, or is there a systematic approach that gives insight into what to change, and where? Even an approach that sometimes fails would be better than pure ad-hockery, I think, as long as it succeeds often enough to be useful.

Regards,
Jo
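P.S. For what it's worth, a minimal sketch of one semi-systematic workflow (GHC heap profiling plus strictness annotations; the mean function here is a made-up stand-in, not code from this thread):

{-# LANGUAGE BangPatterns #-}

-- Step 1: compile with profiling and collect a heap profile:
--   ghc -O2 -prof -fprof-auto Main.hs
--   ./Main +RTS -hc -RTS && hp2ps -c Main.hp
-- The peak in the resulting graph usually points at a lazy accumulator.

-- Leaky version: both pair components grow as unevaluated thunks.
meanLeaky :: [Double] -> Double
meanLeaky xs = s / fromIntegral n
  where (s, n) = foldl (\(a, c) x -> (a + x, c + 1)) (0, 0 :: Int) xs

-- Step 2: force the accumulators with bang patterns; constant space.
meanStrict :: [Double] -> Double
meanStrict xs = go 0 0 xs
  where
    go !s !n []         = s / fromIntegral (n :: Int)
    go !s !n (x : rest) = go (s + x) (n + 1) rest

main :: IO ()
main = print (meanStrict [1 .. 1000000])

This is not the systematic theory asked for above, just "profile, find the fattest thunk, force it, repeat" - but at least it fails visibly rather than silently.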