
On Thursday, 18 March 2010, 21:57:34, Daniel Fischer wrote:
> Contrary to my expectations, however, using unboxed arrays is slower than straight arrays (in my tests).
However, a few {-# SPECIALISE #-} pragmas set the record straight.
Specialising speeds up both boxed and unboxed arrays significantly, but now,
for the specialised types, unboxed arrays are faster. (Note, however, that
when the binary search code lives in the same module where it is used and is
compiled with optimisations, GHC will probably specialise it itself. If
binarySearch is not exported, AFAIK, you can delete "probably".)

{-# LANGUAGE BangPatterns #-}
module SATBinSearch (binarySearch) where

import Data.Array.IArray
import Data.Array.Base (unsafeAt)
import Data.Bits

{-# SPECIALISE binarySearch :: Double -> Array Int Double -> Int #-}
{-# SPECIALISE binarySearch :: Int -> Array Int Int -> Int #-}
{-# SPECIALISE binarySearch :: Bool -> Array Int Bool -> Int #-}
{-# SPECIALISE binarySearch :: Char -> Array Int Char -> Int #-}
{-# SPECIALISE binarySearch :: Float -> Array Int Float -> Int #-}

binarySearch :: Ord a => a -> Array Int a -> Int
binarySearch q a = go l h
  where
    (l,h) = bounds a
    go !lo !hi
      | hi < lo   = -(lo+1)
      | otherwise = case compare mv q of
                      LT -> go (m+1) hi
                      EQ -> m
                      GT -> go lo (m-1)
      where
        -- m = lo + (hi-lo) `quot` 2
        m  = (lo .&. hi) + (lo `xor` hi) `shiftR` 1
        mv = a `unsafeAt` m

Use Data.Array.Unboxed and UArray if possible. Now the bit-fiddling instead
of arithmetic for the midpoint makes a serious difference, about 20% for
unboxed arrays and 17% for boxed arrays (Double), so I'd recommend that.
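To actually accept a UArray, the signature has to be made polymorphic in the
array representation. A minimal sketch of how that could look (the module
name SATBinSearchU, the function name binarySearchU, and the particular
SPECIALISE choices below are assumptions for illustration, not part of the
measurements above):

{-# LANGUAGE BangPatterns #-}
module SATBinSearchU (binarySearchU) where

import Data.Array.Unboxed        -- re-exports the IArray interface
import Data.Array.Base (unsafeAt)
import Data.Bits

-- Specialise for the concrete array/element types actually used.
{-# SPECIALISE binarySearchU :: Double -> UArray Int Double -> Int #-}
{-# SPECIALISE binarySearchU :: Int -> UArray Int Int -> Int #-}

-- Same algorithm as binarySearch above, but polymorphic in the array
-- representation, so it works for both Array and UArray.
binarySearchU :: (IArray arr e, Ord e) => e -> arr Int e -> Int
binarySearchU q a = go l h
  where
    (l,h) = bounds a
    go !lo !hi
      | hi < lo   = -(lo+1)
      | otherwise = case compare mv q of
                      LT -> go (m+1) hi
                      EQ -> m
                      GT -> go lo (m-1)
      where
        m  = (lo .&. hi) + (lo `xor` hi) `shiftR` 1
        mv = a `unsafeAt` m

For example, with arr = listArray (0,4) [1,2,3,4,5] :: UArray Int Double,
binarySearchU 3.0 arr gives 2, and binarySearchU 2.5 arr gives -3, i.e. the
insertion point 2 encoded as -(2+1).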