On 4/13/07, Steffen Mazanek <haskell@steffen-mazanek.de> wrote:
> Hello everybody,
>
> I would like to start a discussion on how to generate
> best-practice Haskell code from a model, e.g. from
> EMF.


I started learning Haskell precisely to solve problems like this one. But once I got into it, I realized that Haskell is a much better modeling language than the one I was using (MOF/UML, the predecessors of EMF). Furthermore, all the infrastructure built on top of that modeling language was easy to replace with plain Haskell code. As a result, I gave up that effort.
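
To give a concrete (and entirely made-up) example of what I mean: a model that in EMF would be a class diagram plus a pile of generated accessor classes is just a few declarations in Haskell. The names below are invented for illustration, not taken from any real project:

  -- A toy "library" model written directly as Haskell data types
  -- instead of a class diagram.  Record fields play the role of
  -- attributes, lists give the 0..* associations, Maybe the 0..1 ones.
  data Library = Library
    { libraryName :: String
    , books       :: [Book]
    } deriving Show

  data Book = Book
    { title   :: String
    , isbn    :: Maybe String    -- optional attribute
    , authors :: [Author]        -- 0..* association
    } deriving Show

  data Author = Author
    { authorName :: String
    } deriving Show

There is nothing left to generate here; the "model" already is compilable code.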

You said "The benefits of the model+generate approach are well known," however I disagree. W3C DOM, MOF, UML, CORBA, and NetBeans 3.x-4.x are all obvious examples of the failure of the model+generate approach. If the modeling language is sufficiently powerful, then it should be feasible to execute the models directly using a (custom-built) interpreter. If the modeling language is weak then it is better to just do the modeling in Haskell or another more powerful language.

The MDA idea was that you would build one model and then be able to use it from a variety of programming languages, without having to rewrite code in each target language. Now people are getting this benefit via a "code, then translate" approach instead. For example, GWT lets the developer write Java code and then generates the equivalent JavaScript, without any hand-wavy models in between. JRuby lets one write code in Ruby to be used by code in Java; IronPython does the same for the other .NET languages. In fact, C# is basically the .NET counterpart to EMF.

FWIW, I also think that data-centric languages like ERD, Relax NG, and XQuery/XPath/XML Schema are a much closer fit for Haskell than EMF is. EMF is designed to be translated into an object-oriented, class-based, (solely) subtype-polymorphic, single-dispatch, single-inheritance language; i.e., Java. In fact, EMF is really a Java-optimized subset of what was supposed to become part of MOF 2.0.
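
As a (made-up) illustration of the fit: a Relax NG-style content model such as "a contact is either a person or a company; a person has a name, an optional email, and zero or more phone numbers" maps onto Haskell algebraic data types almost one-for-one, while the choice part is exactly what a single-inheritance, subtype-only language has to encode with subclassing:

  -- Roughly:  contact = person | company
  --           person  = name, email?, phone*
  data Contact = PersonContact Person
               | CompanyContact Company
    deriving Show

  data Person = Person
    { personName :: String
    , email      :: Maybe String   -- "?" becomes Maybe
    , phones     :: [String]       -- "*" becomes a list
    } deriving Show

  data Company = Company
    { companyName :: String
    } deriving Show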

- Brian