
Most widely-used programs (e.g. web browsers, word processors, email programs, databases, IDEs) tend to be 90% IO and 10% (or less) computation. This can make Haskell quite unwieldy for solving these types of problems. On the other hand, writing something like a compiler, which requires a small amount of IO (reading source files, writing results to a file) and a large amount of "unseen" computation (generating and optimizing code), is right up Haskell's alley.
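To make that shape concrete, here is a minimal sketch of such a pipeline (the Expr type and the single constant-folding pass are made-up placeholders, not any real compiler's design): IO is confined to reading the source and writing the output, and everything in between is pure.

module Main where

import System.Environment (getArgs)

-- hypothetical internal representation of the source program
data Expr = Lit Int | Add Expr Expr

-- pure "front end": turn external text into the internal form
-- (a real parser would go here; this one only reads an integer)
parse :: String -> Expr
parse = Lit . read

-- pure "middle end": one constant-folding pass over the tree
optimize :: Expr -> Expr
optimize (Add (Lit a) (Lit b)) = Lit (a + b)
optimize (Add l r)             = Add (optimize l) (optimize r)
optimize e                     = e

-- pure "back end": map the internal form back to external text
render :: Expr -> String
render (Lit n)   = show n
render (Add l r) = "(" ++ render l ++ " + " ++ render r ++ ")"

main :: IO ()
main = do
  [src, dst] <- getArgs
  text <- readFile src                               -- small IO in
  writeFile dst (render (optimize (parse text)))     -- pure core, small IO out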
hey, compilers do nothing but IO! they read sources, print error messages, or write object files and executables. there is really no need to do any "unseen" computation at all! unless there is some visible effect on the file system or programmer console, any such "unseen" computation is wasted, and should be optimised away. .. just kidding, of course?-)

if compilers seem more suitable for haskell, it may be because they have been studied for a long time, and while throughput is important, no one is likely to argue that the core of compilation is about reading or writing files. the focus in writing compilers is on designing, implementing and manipulating the internal representations of the source and object code, which are represented externally as strings of chars and bytes.

applying the same reasoning to your "most widely-used programs", we could say that their theory hasn't reached the same level as that of compilers (so i'd remove databases from your list, and editors are also reasonably well understood). once their internal representations are better understood, programming again focuses on working with these internal representations (often called models), while IO reduces to a straightforward mapping from and to those internal representations that are closest to the external ones.

claus
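A minimal sketch of the shape described above, for an interactive program rather than a compiler (the Model type and the "quit" command are purely illustrative): all state changes are pure functions on the model, and IO only maps between external lines of text and that model.

module Main where

-- internal representation ("model") of the application state
newtype Model = Model { entries :: [String] }

-- pure core: every state change is a function Model -> Model
addEntry :: String -> Model -> Model
addEntry e (Model es) = Model (e : es)

-- pure mapping from the internal form to the external one
render :: Model -> String
render = unlines . reverse . entries

-- IO shell: read a line, update the model, repeat until "quit"
loop :: Model -> IO ()
loop m = do
  line <- getLine
  case line of
    "quit" -> putStr (render m)
    entry  -> loop (addEntry entry m)

main :: IO ()
main = loop (Model [])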