Performance of functional queues

Hello,

I benchmarked the various functional queue implementations, including simple batched queues (data Q = Q [a] [a]), Data.Queue and Data.Sequence.

The end result was that the simple batched queues are fast and toList of Data.Sequence is very slow. For most applications Data.Sequence is faster than Data.Queue but slower than the batched queues, though the differences are not huge. Of course it offers other functionality which the other queues don't provide.

For pics of the results and the code, look at http://www.cs.helsinki.fi/u/ekarttun/haskell/sequence-performance/

- Einar Karttunen
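The batched-queue representation the post mentions can be sketched like this (a minimal version written for illustration; the names empty, push and pop are my own, not the API of any of the benchmarked libraries). Pops are served from the front list and pushes accumulate on the rear list; when the front empties, the rear is reversed in one batch, which is what makes the operations O(1) amortized under single-threaded use:

```haskell
-- Minimal batched queue: front list for dequeuing, rear list
-- (kept in reverse order) for enqueuing.
data Q a = Q [a] [a] deriving Show

empty :: Q a
empty = Q [] []

-- Invariant: the front list is empty only if the whole queue is.
-- When the front runs out, reverse the rear in one O(n) batch.
check :: Q a -> Q a
check (Q [] r) = Q (reverse r) []
check q        = q

push :: a -> Q a -> Q a
push x (Q f r) = check (Q f (x:r))

pop :: Q a -> Maybe (a, Q a)
pop (Q []    _) = Nothing
pop (Q (x:f) r) = Just (x, check (Q f r))

-- Drain the queue to a list, front first.
toListQ :: Q a -> [a]
toListQ q = case pop q of
  Nothing      -> []
  Just (x, q') -> x : toListQ q'

main :: IO ()
main = print (toListQ (foldl (flip push) empty [1..5 :: Int]))
```

Pushing 1..5 and draining yields the elements in FIFO order.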

On 11/2/05, Einar Karttunen wrote:
Hello
I benchmarked the various functional queue implementations including simple batched queues (data Q = Q [a] [a]), Data.Queue and Data.Sequence.
The end result was that the simple batched queues are fast and toList of Data.Sequence is very slow. For most applications Data.Sequence is faster than Data.Queue but slower than the batched queues, though the differences are not huge. Of course it offers other functionality which the other queues don't provide.
For pics of the results and the code look at http://www.cs.helsinki.fi/u/ekarttun/haskell/sequence-performance/
There ought to be a benchmark where you perform an operation and then, based on some random factor, either proceed with the next operation using the resulting queue, or using the queue from before the operation you just performed. That happens a lot in practice and will show the strength of Data.Queue better (it's designed specifically to spread out the cost of the list reversal).

Also, I'm interested in seeing benchmarks where the time of each operation is squared (or something like that), to show how well suited they are for real-time operations (where it's better for an operation to consistently take 1 ms than to take 1 µs on average but every now and then take 50 ms).

/S

--
Sebastian Sylvan
+46(0)736-818655
UIN: 44640862
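The access pattern Sebastian describes could be sketched against Data.Sequence roughly as follows (a hypothetical skeleton, not the benchmark from the thread; a fixed list of coin flips stands in for the random factor so the example stays deterministic):

```haskell
import qualified Data.Sequence as Seq
import Data.Sequence (Seq, ViewL(..), viewl)
import Data.Foldable (toList)

-- One step of the persistent-use pattern: pop the head, then either
-- continue with the new queue (coin == True) or revert to the queue
-- from before the pop (coin == False).
step :: Bool -> Seq a -> Seq a
step keepNew q = case viewl q of
  EmptyL  -> q
  _ :< q' -> if keepNew then q' else q

main :: IO ()
main = do
  let q0    = Seq.fromList [1..10 :: Int]
      -- Deterministic stand-in for "some random factor":
      coins = cycle [True, True, False]
      qEnd  = foldl (flip step) q0 (take 9 coins)
  print (toList qEnd)
```

Every third step here discards its result and reuses the previous queue, so the same queue version is consumed more than once, which is exactly the persistent usage that amortized-but-not-persistent structures handle badly.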

On 02.11 17:38, Sebastian Sylvan wrote:
There ought to be a benchmark where you perform an operation and then, based on some random factor, either proceed with the next operation using the resulting queue, or using the queue from before the operation you just performed. That happens a lot in practice and will show the strength of Data.Queue better (it's designed specifically to spread out the cost of the list reversal).
Taking the head of the queue is one such operation in practice. Adding a persistent version of tail (take the tail and discard it) would be harder, as the sequences would need a deepSeq instance...
Also, I'm interested in seeing benchmarks where the time of each operation is squared (or something like that), to show how well suited they are for real-time operations (where it's better for an operation to consistently take 1 ms than to take 1 µs on average but every now and then take 50 ms).
I am afraid this would mostly register when the garbage collector starts to work. But analyzing the results in a better manner would be a fun exercise...

- Einar Karttunen

On Wed, Nov 02, 2005 at 06:06:55PM +0200, Einar Karttunen wrote:
I benchmarked the various functional queue implementations including simple batched queues (data Q = Q [a] [a]), Data.Queue and Data.Sequence.
The end result was that the simple batched queues are fast and toList of Data.Sequence is very slow. For most applications Data.Sequence is faster than Data.Queue but slower than the batched queues, though the differences are not huge. Of course it offers other functionality which the other queues don't provide.
Also, Data.Queue (Okasaki's real-time queues) and Data.Sequence (finger trees) retain their performance bounds under persistent use, while batched queues don't. That's an important property for a general-purpose library. Repeatedly taking the head doesn't show this, as batched queues keep the first list nonempty unless the whole queue is empty. It would show up if you repeatedly computed head (tail (Q [x] long_list)).
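Ross's worst case can be made concrete with the batched representation (a sketch; qhead and qtail are names chosen here for illustration). With a one-element front list, taking the tail must reverse the entire rear list; under persistent use, each separate evaluation of the tail of the *same* queue repeats that O(n) reverse, which is exactly what the single-threaded amortized analysis assumes away:

```haskell
-- Batched queue as in the thread.
data Q a = Q [a] [a]

-- With a singleton front, qtail reverses the whole rear: O(length r).
qtail :: Q a -> Q a
qtail (Q [_]   r) = Q (reverse r) []
qtail (Q (_:f) r) = Q f r
qtail (Q []    _) = error "qtail: empty queue"

qhead :: Q a -> a
qhead (Q (x:_) _) = x
qhead (Q []    _) = error "qhead: empty queue"

main :: IO ()
main = do
  let long_list = [1..100000 :: Int]
      q         = Q [0] long_list
  -- Three independent uses of the same queue version. Because q is
  -- reused persistently, each unshared evaluation of qtail q pays the
  -- full reverse again instead of once across the whole sequence.
  print (qhead (qtail q))
  print (qhead (qtail q))
  print (qhead (qtail q))
```

Real-time queues avoid this by performing the reversal incrementally, so any version of the queue can be used repeatedly without re-paying an O(n) step.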
participants (3)
- Einar Karttunen
- Ross Paterson
- Sebastian Sylvan