
On 02.11 17:38, Sebastian Sylvan wrote:
> There ought to be a benchmark where you perform an operation and then, based on some random factor, either proceed with the next operation using the resulting queue, or using the queue before the operation you just performed. That happens a lot in practice and will show the strength of Data.Queue better (it's designed specifically to spread out the cost of the list reversal).
Taking the head of the queue is one such operation in practice. Adding a persistent version of tail (taking the tail and discarding it) would be harder, as the sequences would need a deepSeq instance...
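A sketch of the access pattern being suggested, using a minimal two-list ("banker's") queue. All names here are hypothetical and the coin-flip source is a toy LCG rather than the random package; this only illustrates the "keep the new queue or go back to the old one" control flow, not a finished benchmark.

```haskell
-- Minimal two-list queue (hypothetical, for illustration only).
data Queue a = Queue [a] [a]

emptyQ :: Queue a
emptyQ = Queue [] []

push :: a -> Queue a -> Queue a
push x (Queue front back) = Queue front (x : back)

pop :: Queue a -> Maybe (a, Queue a)
pop (Queue [] [])   = Nothing
pop (Queue [] back) = pop (Queue (reverse back) [])  -- the amortised reversal
pop (Queue (x:f) b) = Just (x, Queue f b)

qSize :: Queue a -> Int
qSize (Queue f b) = length f + length b

-- Cheap deterministic coin flips, to avoid a dependency on System.Random.
coins :: [Bool]
coins = map even (iterate step 1)
  where step s = (s * 1103515245 + 12345) `mod` 2147483648

-- Perform n push-then-pop operations; after each one, a coin flip decides
-- whether to continue from the resulting queue or from the queue as it was
-- before the operation, exercising persistence (the reversal can be re-run
-- on the same old version).
driver :: Int -> [Bool] -> Queue Int -> Queue Int
driver 0 _ q = q
driver n (keepNew : cs) q =
  let q1 = push n q
      q2 = maybe q1 snd (pop q1)
  in driver (n - 1) cs (if keepNew then q2 else q)
driver _ [] q = q

main :: IO ()
main = print (qSize (driver 1000 coins emptyQ))
-- prints 0: each step pushes one element and pops one, whichever
-- version of the queue the coin flip selects.
```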
> Also, I'm interested in looking at benchmarks where the time of each operation is squared (or something like that), to show how well suited the structures are for real-time operations (where it's better to consistently take 1 ms than to take 1 µs on average but every now and then take 50 ms).
I am afraid this would mostly register whenever the garbage collector kicks in. But analysing the results more carefully would be a fun exercise... - Einar Karttunen
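For what it's worth, the squared-cost idea can be illustrated with made-up per-operation timings (all numbers below are invented for illustration): the plain mean prefers the occasionally-spiky structure, while the sum of squares penalises the rare 50 ms pause and prefers the steady one.

```haskell
-- Invented per-operation times, in milliseconds.
steady, spiky :: [Double]
steady = replicate 1000 1.0               -- always 1 ms
spiky  = replicate 999 0.001 ++ [50.0]    -- 1 us usually, one 50 ms spike

mean, sumSq :: [Double] -> Double
mean xs  = sum xs / fromIntegral (length xs)
sumSq xs = sum (map (^ (2 :: Int)) xs)    -- the squared-cost metric

main :: IO ()
main = do
  print (mean steady, mean spiky)    -- spiky looks better on average
  print (sumSq steady, sumSq spiky)  -- steady looks better on squared cost
```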