
Achim Schneider wrote:
To get a bit more on-topic: I currently completely fail to implement a layout rule in Parsec because I don't understand its inner workings, and thus constantly mess up my state. Parsec's ease of usage is deceiving as soon as you use more than combinators: Suddenly the plumbing becomes important, and hackage is full of such things. Haskell has potentially infinite learning curves, and each one of them usually represents a wall. To make them crumble, you have to get used to not understand anything until you understand everything.
A big component of this is simply the high level of abstraction involved. Something similar occurs in other languages when programs are written in a very abstract way. Some frameworks in e.g. Smalltalk, Java, or C++ are an example: they're full of classes whose domain is mainly internal to the framework, and you have to understand the framework's design principles in their full generality before you can really understand the code.

As a more concrete example related to Parsec, consider a generator of table-driven parsers written in C, and compare it to writing a recursive-descent parser directly. The parser generator's code is completely impenetrable to someone who isn't familiar with the theory behind it, so if they want to change the generator's behavior, they're likely to be stuck. With a recursive-descent parser for a single language, on the other hand, it's much easier to map between the ultimate application goals and how those goals are accomplished in the code, without much special knowledge. Of course there are pros and cons on either side.

One reason that DSLs work well is that, when done right, so that abstraction leakage is minimal, they can insulate users from having to understand the underlying system. Embedded DSLs like Parsec seem more likely to suffer from problems in this area, although there the tradeoff is that you get to use them directly within a general-purpose language.

Anton
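P.S. To make the recursive-descent side of that comparison concrete, here is a minimal sketch of an expression parser written directly with Parsec's combinators. The grammar and the names are purely illustrative, nothing to do with Achim's layout problem; it just shows how closely the code can track the grammar it parses:

-- Requires the parsec package.
import Text.Parsec
import Text.Parsec.String (Parser)

-- Illustrative grammar:
--   expr   ::= term   ('+' term)*
--   term   ::= factor ('*' factor)*
--   factor ::= integer | '(' expr ')'

expr :: Parser Integer
expr = term `chainl1` (char '+' >> return (+))

term :: Parser Integer
term = factor `chainl1` (char '*' >> return (*))

factor :: Parser Integer
factor = (read <$> many1 digit)
     <|> between (char '(') (char ')') expr

-- ghci> parse expr "" "1+2*(3+4)"
-- Right 15

Each definition reads almost like the grammar rule it implements, which is exactly the mapping between application goals and code that's much harder to see inside a table-driven generator.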