
On Sat, Apr 03, 2004 at 07:04:18PM +0200, Henning Thielemann wrote:
> What is the most promising strategy to speed up the computation without
> losing the elegance of Haskell's lists?
>  1. Upgrading GHC from 6.0 to the current version?
>  2. Use of simplification rules? How can I find out what new rules may be
>     useful? The several dumps that GHC can generate are a bit hard to
>     understand. :-( Is there more detailed information about which rules
>     GHC already applies than what one can get with -ddump-simpl-stats and
>     -ddump-rules?
>  3. More strictness? Is a custom list structure with strict entries, and
>     maybe with only one constructor (thus always infinite), a serious
>     alternative? This would disallow the use of lots of functions that
>     deal with lists, including State monad functions.
>  4. A list of arrays? Then signal processes with feedback, like an echo,
>     are much more difficult.
Try using profiling to see where exactly your time is being spent, and then
perhaps post back here with the problematic function. Also, of course,
looking at scaling is key: make sure you know the scaling of each function
(i.e. how many times it gets called), and that there isn't anything you can
do to fix the scaling.

Lists are slow, and laziness is slow, but on the other hand computers are
fast, so before trying any of 1-4 (all of which seem likely to uglify your
code a bit) I'd try to see if you can fix up the algorithms in your code to
be more efficient. I assume that if your highest priority were writing
blindingly fast code, you'd be writing in C.

I guess one thing you could try (if you haven't already done so) is writing
a simple program that just reads and writes and does nothing in between
(except convert to a lazy list), to see how slow that is.

Is laziness really an important issue? Are you interested in it mostly for
memory-consumption reasons, to avoid holding an entire file in memory, or
are you perhaps thinking of making the code work with real-time input?

If you decide you need something faster (but still pretty), don't need
laziness, and can fit your data in memory, I think an infinite array might
be the way to go. You could pretty easily create a wrapper around a UArray
or a ForeignPtr which pads it with zeros in both the plus- and
minus-infinity directions. Then you could also create a "time offset"
function that shifts an array by a given amount (without making a copy),
and could probably do most of what you'd like to do. I imagine something
like

import Data.Int (Int16)
import Foreign.ForeignPtr (ForeignPtr, mallocForeignPtrArray, withForeignPtr)
import Foreign.Ptr (Ptr)
import Foreign.Storable (peekElemOff)
import System.IO.Unsafe (unsafePerformIO)

-- Zero outside [start_index, end_index], stored samples inside.
data InfiniteArray = IA !(ForeignPtr Int16) !Int !Int !Int

(!) :: InfiniteArray -> Int -> Int16
(IA for_ptr start_index end_index the_offset) ! i
    | i < start_index = 0
    | i > end_index   = 0
    | otherwise       = unsafePerformIO $
                          withForeignPtr for_ptr $ \p ->
                            peekElemOff p (the_offset + i)

createIA :: Int -> (Ptr Int16 -> IO ()) -> InfiniteArray
createIA len write_ptr = unsafePerformIO $ do
    fp <- mallocForeignPtrArray len
    withForeignPtr fp $ \p -> write_ptr p
    return $ IA fp 0 (len - 1) 0

offsetIA :: Int -> InfiniteArray -> InfiniteArray
offsetIA offset_by (IA for_ptr start_index end_index the_offset) =
    IA for_ptr (start_index + offset_by) (end_index + offset_by)
               (the_offset - offset_by)

Note that I only used ForeignPtrs because I'm familiar with them, and
haven't used UArrays much. Unless you need to use the FFI, you're probably
better off with UArrays. On the other hand, with ForeignPtrs you leave
yourself open to the possibility of calling an FFT routine through the FFI,
and you definitely *don't* want to write an FFT in Haskell.

-- 
David Roundy
http://www.abridgegame.org
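
P.S. By "a simple program that just reads and writes" I mean something as
trivial as the sketch below (untested, and using plain lazy String I/O
rather than your sample type). It pulls the whole input through a lazy
list, one element at a time, and writes it straight back out; timing it
against your real program gives a rough lower bound on what the lazy-list
plumbing itself costs.

-- Baseline: read stdin lazily, rebuild the list element by element,
-- and write it back out unchanged.
main :: IO ()
main = interact (map id)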
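
To connect this with the echo in your point 4: with offsetIA, mixing in a
delayed, attenuated copy of the signal doesn't copy any samples, since
reading one output sample just indexes the same buffer twice. A helper
along these lines would do a single feed-forward echo (echoSample is just
my name for it; untested, and it ignores clipping):

-- Output sample i is x[i] + x[i - delay] / 2, using the InfiniteArray above.
echoSample :: Int -> InfiniteArray -> Int -> Int16
echoSample delay xs i = xs ! i + (delayed ! i) `div` 2
  where delayed = offsetIA delay xs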
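
And if you'd rather avoid the FFI entirely, the same padded-array idea
works with an unboxed immutable array. A rough equivalent using
Data.Array.Unboxed (untested, and the names are again mine) might look
like:

import Data.Array.Unboxed (UArray)
import qualified Data.Array.IArray as A
import Data.Int (Int16)

-- Stored samples plus an index offset; zero outside the stored range.
data InfArrU = IAU !(UArray Int Int16) !Int

indexU :: InfArrU -> Int -> Int16
indexU (IAU arr off) i
    | j < lo || j > hi = 0
    | otherwise        = arr A.! j
  where j        = i + off
        (lo, hi) = A.bounds arr

fromListU :: [Int16] -> InfArrU
fromListU xs = IAU (A.listArray (0, length xs - 1) xs) 0

-- Shifting in time only changes the offset; no samples are copied.
offsetU :: Int -> InfArrU -> InfArrU
offsetU d (IAU arr off) = IAU arr (off - d)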