
ajb@spamcop.net wrote:

> G'day all.
>
> Quoting wren ng thornton:
>
>> Most of the (particular) problems OO design patterns solve are non-issues in Haskell because the language is more expressive.
>
> ...and vice versa. Some of the "design patterns" that we use in Haskell, for example, are to overcome the fact that Haskell doesn't have mutable global state.
Oh sure. However, in my experience the most common design patterns of OO are the ones obviated in functional languages. And I haven't tended to need any of the global-mutation patterns in Haskell.[1] Not an objective analysis by any means, but I do think it holds more water than it leaks. The whole genre of SYB papers indicates that there's no panacea.

[1] The two times I've needed global mutation, one was for arbitrary unique-symbol generation (uses a library), and the other was for doing some very tricky memoization with self-compaction when memory is low. The latter, I'd think, is certainly too special-purpose to be considered a "pattern".
>> A number of other patterns can actually be written down once and for all (in higher-order functions like foldr, map, ...) instead of needing repetition.
>
> This is also true in many OO languages. A lot of the GoF book, for example, can be implemented as libraries in Ada or C++.
I think this depends very much on the specific language in question. For dynamic OO languages with lambdas (Smalltalk, Ruby, Python, Perl) it's easy, but then it's basically the same thing. For languages with a way of doing mixins (the aforementioned, plus C++) it's also pretty easy. But for languages like Java, oftentimes it's a choice between impossible and grossly inefficient. I've been dealing with a lot of Java lately.

Certainly it's _possible_ to do in any language (Turing tarpit and all that), but the question is how much boilerplate is involved and how efficient the result is. The latter point matters not only for the projects I've worked on, but also for how widely adopted any given pattern becomes; many people are fastidious about performance. The former matters for whether it gets spread as a "pattern" or whether it gets packaged up in a library somewhere.
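A minimal sketch of the "written down once and for all" point above: the traversal logic of GoF's Iterator and the pluggable behaviour of Strategy each collapse into an ordinary higher-order function. The names `total` and `applyDiscount` are made up for illustration.

```haskell
-- "Iterator", written once and for all: foldr owns the traversal;
-- callers only supply the combining step and the starting value.
total :: [Double] -> Double
total = foldr (+) 0

-- "Strategy": the varying behaviour is just a function argument,
-- with no interface or subclass boilerplate.
applyDiscount :: (Double -> Double) -> [Double] -> [Double]
applyDiscount strategy = map strategy

main :: IO ()
main = do
  print (total [10, 20, 30])              -- 60.0
  print (applyDiscount (* 0.9) [10, 20])  -- [9.0,18.0]
```

In Java-of-the-day, each of these would be an interface plus a hand-rolled loop at every use site; here the loop exists exactly once, inside foldr and map.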
>> And then there are some things like monoids which fall somewhere between idiom and pearl.
>
> "Things like monoids" are constructions from algebra.
>
> Abstract algebra and design patterns have a lot in common. They're based on the same idea, in fact: when a pattern keeps showing up, define it and give it a name so you can talk about it independently of any specific implementation.
>
> Or to put it another way, category theory is the pattern language of mathematics.
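To make the quoted point concrete, here is a minimal sketch of naming the pattern once: a monoid is an associative operation with an identity, and anything written against the class name works for every instance. The class is spelled Monoid' here (with memptyM/mappendM) purely to keep the sketch self-contained and avoid clashing with the Prelude's real Monoid.

```haskell
-- The recurring pattern, defined once and given a name:
-- an associative operation with an identity element.
class Monoid' a where
  memptyM  :: a
  mappendM :: a -> a -> a

-- Lists form a monoid under concatenation...
instance Monoid' [b] where
  memptyM  = []
  mappendM = (++)

-- ...and integers under addition (wrapped in a newtype so the
-- choice of operation is unambiguous).
newtype Sum = Sum Int deriving (Eq, Show)

instance Monoid' Sum where
  memptyM                  = Sum 0
  mappendM (Sum x) (Sum y) = Sum (x + y)

-- Written once against the name, usable for every instance,
-- past and future.
mconcatM :: Monoid' a => [a] -> a
mconcatM = foldr mappendM memptyM
```

So `mconcatM ["foo", "bar"]` yields `"foobar"`, and `mconcatM [Sum 1, Sum 2, Sum 3]` yields `Sum 6`, from the same single definition.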
Indeed. Though, IMO, there's a distinction between fairly banal things (e.g. monoids) and the other, more interesting bits of category theory and abstract algebra. Monoids often occur by happenstance, and their triviality makes them more like an idiom. Similarly, functors also tend to just happen. However, once you start getting into applicative functors, natural transformations, and the like, you've made a step from incidentally using a ubiquitous pattern of mathematics to making a concerted effort at abstraction.

-- 
Live well,
~wren