
And note we are pushing precisely on the use of DSLs in or on Haskell for *portability* of domain scientists' code in a number of areas right now:

* data parallel algorithms (targeting CPU, GPU):
  Accelerate: http://hackage.haskell.org/package/accelerate-0.6.0.0
  Obsidian: http://www.cse.chalmers.se/~joels/writing/dccpaper_obsidian.pdf
* control systems code:
  Atom: http://hackage.haskell.org/package/atom
* cryptography:
  Cryptol: http://www.galois.com/technology/communications_security/cryptol
* avionics verification:
  http://www.galois.com/blog/2009/05/15/edsls-for-unmanned-autonomous-verifica...
* financial modelling:
  Paradise: http://www.londonhug.net/2008/08/11/video-paradise-a-dsel-for-derivatives-pr...
  FPF: http://lambda-the-ultimate.org/node/3331
* operating systems:
  http://www.barrelfish.org/fof_plos09.pdf

In all cases we're looking at high-level code, the possibility of multiple backends, and constrained semantics enabling extensive optimization and analysis. And -- since we're generating code, there's no benefit to having the language hosted on the JVM or .NET -- Haskell should *own* this space.

This may be Haskell's killer app now that DSLs are going mainstream. We have mature technology for good DSLs, and far more resources than Scala. Why isn't Haskell completely dominating this space? I believe it is lack of training and outreach. We need a "Write you a DSL for great good!"

-- Don

dpiponi:
Yesterday I was at a talk by Pat Hanrahan on embedded DSLs and GPUs at the NVIDIA GPU Technology Conference: http://www.nvidia.com/object/gpu_technology_conference.html
Pat argued that the only way forward to achieve usable computing power for physics on heterogeneous computers (e.g. multicore + GPU) is through the use of embedded DSLs, which allow physicists to express algorithms without reference to the underlying architecture, or even to details like data structures.
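The pattern both posts describe -- one high-level embedded language, multiple backends, constrained semantics that permit analysis -- can be sketched in a few lines of Haskell. This is a hypothetical toy (a deep-embedded arithmetic DSL, not the API of Accelerate, Atom, or any library above): a single AST served by an evaluator backend, a C-like code-generator backend, and a trivial constant-folding optimizer.

```haskell
{-# LANGUAGE GADTs #-}
-- Toy deep-embedded DSL: one first-class AST, multiple backends.

data Expr where
  Lit :: Double -> Expr
  Var :: String -> Expr
  Add :: Expr -> Expr -> Expr
  Mul :: Expr -> Expr -> Expr

-- Backend 1: evaluate directly against a variable environment.
eval :: [(String, Double)] -> Expr -> Double
eval _   (Lit x)   = x
eval env (Var v)   = maybe (error ("unbound: " ++ v)) id (lookup v env)
eval env (Add a b) = eval env a + eval env b
eval env (Mul a b) = eval env a * eval env b

-- Backend 2: emit a C-like expression string (stand-in for real codegen).
compileC :: Expr -> String
compileC (Lit x)   = show x
compileC (Var v)   = v
compileC (Add a b) = "(" ++ compileC a ++ " + " ++ compileC b ++ ")"
compileC (Mul a b) = "(" ++ compileC a ++ " * " ++ compileC b ++ ")"

-- Constrained semantics make analysis easy: constant folding as an example.
simplify :: Expr -> Expr
simplify (Add a b) = case (simplify a, simplify b) of
  (Lit x, Lit y) -> Lit (x + y)
  (a', b')       -> Add a' b'
simplify (Mul a b) = case (simplify a, simplify b) of
  (Lit x, Lit y) -> Lit (x * y)
  (a', b')       -> Mul a' b'
simplify e = e

main :: IO ()
main = do
  let e = Mul (Add (Lit 1) (Lit 2)) (Var "x")
  print (eval [("x", 10)] (simplify e))  -- prints 30.0
  putStrLn (compileC (simplify e))       -- prints (3.0 * x)
```

The point is that the same `Expr` term feeds every backend: the domain scientist writes one high-level expression, and the CPU interpreter, the code generator, and the optimizer are all just folds over the AST. Real systems like Accelerate apply the same idea to parallel array programs.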