
Hi Janek,
yes to both -- in a way. See Section 5.3 here for lists:
http://dl.acm.org/citation.cfm?id=2543736
For my usual work, I use stream fusion and manually 'flatten' everything
in all of ADPfusion and a rather large bunch of other work building on
top of that. ;-)
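To give a rough idea of what I mean by 'flattening' -- this is a minimal
sketch of my own, not actual ADPfusion code, and it assumes the vector
package, whose combinators are built on stream fusion:

  import qualified Data.Vector.Unboxed as VU

  -- Comprehension style: goes through an intermediate list.
  sumSqList :: Int -> Int
  sumSqList n = sum [ x * x | x <- [1 .. n], even x ]

  -- 'Flattened' into a single pipeline of fusible combinators;
  -- the intermediate vectors are removed by stream fusion.
  sumSqVec :: Int -> Int
  sumSqVec n =
    VU.sum (VU.map (\x -> x * x) (VU.filter even (VU.enumFromN 1 n)))

The names here are made up for the example, of course.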
Giegerich's original ADP is full of list comprehensions -- every single
function uses them, and it does not require a lot of additional
machinery to run.
http://bibiserv.techfak.uni-bielefeld.de/adp/
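Just to give the flavor of that style (a hedged sketch of my own, not
code from Giegerich's actual combinator library): a typical ADP
recurrence enumerates every split point of a subword with nested
generators, roughly

  -- Sketch only: the comprehension-heavy shape of an ADP-style
  -- recurrence. 'p' and 'q' produce parses of the subwords (i,k)
  -- and (k,j); 'f' combines their results.
  splits :: (Int -> Int -> [a]) -> (Int -> Int -> [b])
         -> (a -> b -> c) -> Int -> Int -> [c]
  splits p q f i j = [ f x y | k <- [i .. j], x <- p i k, y <- q k j ]

which reads very close to the recurrence you would write on paper.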
Note that if you want to introduce deep optimizations, it'll be a larger
project. See also Coutts' PhD thesis ([2] in our paper) and the original
stream fusion paper [3].
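In case it helps, the core of [3] fits in a few lines -- this is a
simplified sketch of the representation from that paper, not GHC's or
vector's actual implementation:

  {-# LANGUAGE ExistentialQuantification #-}

  -- A list becomes a non-recursive step function over some hidden
  -- state, so producer and consumer can be fused into one loop.
  data Step s a = Done | Skip s | Yield a s

  data Stream a = forall s. Stream (s -> Step s a) s

  stream :: [a] -> Stream a
  stream xs0 = Stream next xs0
    where
      next []       = Done
      next (x : xs) = Yield x xs

  unstream :: Stream a -> [a]
  unstream (Stream next s0) = go s0
    where
      go s = case next s of
        Done       -> []
        Skip s'    -> go s'
        Yield x s' -> x : go s'

  mapS :: (a -> b) -> Stream a -> Stream b
  mapS f (Stream next s0) = Stream next' s0
    where
      next' s = case next s of
        Done       -> Done
        Skip s'    -> Skip s'
        Yield x s' -> Yield (f x) s'

List functions are then written as, e.g., map f = unstream . mapS f .
stream, and a rewrite rule stream (unstream s) = s lets whole pipelines
collapse into a single loop.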
Regards,
Christian
===
[2] D. Coutts. Stream Fusion: Practical Shortcut Fusion for Coinductive
Sequence Types. PhD thesis, University of Oxford, 2010.
[3] D. Coutts, R. Leshchinskiy, and D. Stewart. Stream fusion: From lists
to streams to nothing at all. In Proceedings of the 12th ACM SIGPLAN
International Conference on Functional Programming, pages 315–326,
Freiburg, Germany, 2007. ACM.
* Jan Stolarek
Haskellers,
recently I've been looking into the possibility of creating some new optimisations for GHC. These would be mostly aimed at list comprehensions. Here's where I need your help:
1. Do you have examples of complex list comprehension usage from real code? By complex I mean nested list comprehensions, reading from more than one list ([ ... | x <- xs, y <- ys ... ]), etc.
2. Do you have list comprehension code that you had to optimize by hand because GHC was unable to make it fast enough?
Janek