
I wrote a pretty efficient skew binomial heap implementation -- the first step of Okasaki's approach -- and found it was already slower than the optimized binomial heap. I'm not sure whether or not I uploaded that benchmark, though. I'll do that at some point today, just to keep everyone happy.
The skew binomial heaps you implemented should only be asymptotically faster than the usual binomial heaps in one special case: comparing a binomial heap in a state that would cause an \Omega(lg n)-time cascade on insert against the worst-case O(1) insert of skew binomial heaps. I think it would also be worth comparing binomial heap merge against Brodal-Okasaki heap merge. Finally, if speed is the ultimate goal, I suspect the clever nested type approach to guaranteeing binomial tree shape slows things down a bit. For reference, the type in the latest patch is:

    data BinomForest rk a
        = Nil
        | Skip !(BinomForest (Succ rk) a)
        | Cons {-# UNPACK #-} !(BinomTree rk a) !(BinomForest (Succ rk) a)

    data BinomTree rk a = BinomTree a (rk a)
    data Succ rk a = Succ {-# UNPACK #-} !(BinomTree rk a) (rk a)
    data Zero a = Zero

I suspect the following might be faster:

    data BinomForest2 a = Empty | NonEmpty a [BinomTree2 a]
    data BinomTree2 a = BinomTree2 a [BinomTree2 a]

This eliminates the "Skip" constructor, which contributes only to the nested type guarantee.
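For concreteness, here is a minimal sketch (mine, not code from the patch) of how insert could look on the flat representation above. It assumes the NonEmpty field caches the global minimum outside the tree list and that the list is kept in strictly increasing rank order; since the flat list no longer encodes ranks positionally the way Skip does, the rank of a tree is recovered from the length of its child list.

    -- Sketch only: insertion into the flat BinomForest2 representation.
    -- A binomial tree of rank r has exactly r children, so the rank can
    -- be read off the child list.
    rank2 :: BinomTree2 a -> Int
    rank2 (BinomTree2 _ ts) = length ts

    -- Link two trees of equal rank, keeping the smaller root on top;
    -- the result has rank one higher.
    link2 :: Ord a => BinomTree2 a -> BinomTree2 a -> BinomTree2 a
    link2 t1@(BinomTree2 x1 c1) t2@(BinomTree2 x2 c2)
        | x1 <= x2  = BinomTree2 x1 (t2 : c1)
        | otherwise = BinomTree2 x2 (t1 : c2)

    -- Carry a tree into a forest ordered by increasing rank, linking as
    -- long as ranks collide (the binary-increment cascade).
    carry2 :: Ord a => BinomTree2 a -> [BinomTree2 a] -> [BinomTree2 a]
    carry2 t [] = [t]
    carry2 t (t' : ts)
        | rank2 t < rank2 t' = t : t' : ts
        | otherwise          = carry2 (link2 t t') ts

    insert2 :: Ord a => a -> BinomForest2 a -> BinomForest2 a
    insert2 x Empty = NonEmpty x []
    insert2 x (NonEmpty m ts)
        | x <= m    = NonEmpty x (carry2 (BinomTree2 m []) ts)
        | otherwise = NonEmpty m (carry2 (BinomTree2 x []) ts)

Note that calling length to recover a rank costs O(r) per comparison, so this sketch does a bit more bookkeeping than the nested version, where the position in the spine fixes the rank for free; a real flat implementation would probably cache the rank in each node instead.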