
I *think* I understand lazy evaluation and its effects on I/O: internally it can build up thunks, promises, continuations, whatever you want to call them, and then at some point there can be a sudden spike in memory and CPU load as something triggers a whole chain of them to be evaluated. What I don't have a feel for is how serious a problem that might be today in, say, a simple desktop application.

For example, suppose I write a simple application in C and a comparable one in Haskell, both running on the same hardware (but not concurrently). Would the Haskell one ever find itself running out of memory, given the same input sequence as the C program, purely because of the way it works? I know the C one could also run out of memory, but I hope the sentiment of my question is clear: what are the ramifications, gotchas and other considerations you need to be aware of when coming to a purely functional lazy language like Haskell from an imperative / scripting background?

I know there are strict flavours of functions that can be used, but only if you are truly aware of the reasons for their existence in the first place. Is this a real design issue to consider when coding an application in Haskell, or does it only matter for certain 'groups' of applications? I guess the nature of the application would be the governing factor. Is it something one needs to worry about at all, or should one just code away, write the application and worry later? I think that having a clearer understanding of what 'types' of problems and their implementations do to CPU/RAM would be a good thing to have! :)

Thanks,
Sean Charles.
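
EDIT: to make the question concrete, here is the kind of thing I have in mind. This is only a minimal sketch based on my current understanding; I'm assuming that the lazy `foldl` builds up a chain of thunks while the strict `foldl'` from `Data.List` forces the accumulator as it goes, so please correct me if the example misrepresents things.

```haskell
import Data.List (foldl')

-- Lazy foldl: the accumulator is never forced while the list is consumed,
-- so this builds a long chain of unevaluated (+) thunks. Forcing the final
-- result is the "sudden spike" I described above.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- Strict foldl': the accumulator is evaluated at every step,
-- so the fold runs in constant space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

main :: IO ()
main = do
  print (strictSum [1 .. 10000000])  -- expected to be fine
  print (lazySum   [1 .. 10000000])  -- may exhaust memory on large inputs
```

Is this the sort of situation where the comparable C program would chug along happily while the Haskell one falls over, unless I know to reach for the strict variant?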