
Hello,

I've written a function to compute the general Josephus problem, giving both the number of the survivor and the order in which the others are killed. However, I am not overly fond of my actual Haskell implementation, so I would like some comments on elegance. My current function is as follows:

    josephus k n = josephus' k 1 [1..n] []
        where
        josephus' k i (x:xs) sorted
            | xs == []       = (x, reverse sorted)
            | i `mod` k == 0 = josephus' k (i+1) xs ([x] ++ sorted)
            | otherwise      = josephus' k (i+1) (xs ++ [x]) sorted

The biggest difficulty I had is representing the circle (a continuous unit, unlike the list). The only solution I've found is to explicitly concatenate lists during each iteration, using an index to check whether the current element should be kept. It seems to me that manipulating lists so often makes the resulting function unclear, and hides the basic principle behind the Josephus sequence. So, I'm looking for a better way to write this function, but enlightenment eludes me.

I took up Haskell in earnest a couple of weeks ago, after a fairly long stay in Lisp land (perhaps it shows). My previous "Eureka!" moment in Haskell was matrix multiplication, along the lines of:

    matProd a b = [[sum (zipWith (*) x y) | y <- transpose b] | x <- a]

Thanks!
Anthony Chaumas-Pellet
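A quick check, added here for illustration: on the classic instance n = 7, k = 3 (every third person is executed), the function above gives

    ghci> josephus 3 7
    (4,[3,6,2,7,5,1])

i.e. the victims fall in the order 3, 6, 2, 7, 5, 1 and person 4 survives.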

On Sun, Apr 01, 2007 at 06:24:23PM +0200, Anthony Chaumas-Pellet wrote:
I've written a function to compute the general Josephus problem, giving both the number of the survivor and the order in which the others are killed. However, I am not overly fond of my actual Haskell implementation, so I would like some comments on elegance. My current function is as follows:

    josephus k n = josephus' k 1 [1..n] []
        where
        josephus' k i (x:xs) sorted
            | xs == []       = (x, reverse sorted)
            | i `mod` k == 0 = josephus' k (i+1) xs ([x] ++ sorted)
            | otherwise      = josephus' k (i+1) (xs ++ [x]) sorted
You show a bias towards tail recursion. It would be neater (and lazier) to return the executed ones incrementally. This is easier if you don't distinguish the survivor from the rest, i.e. just put it on the end of the list. If you represent the circle with the sequence in Data.Sequence instead of a list, you reduce the cost of adding at the end, for a cost of O(k) per execution. But there's a cheaper way of rotating by k-1 elements (at least for large n and k). Instead of moving k-1 elements one at a time to the other end, you could split after (k-1) `mod` length xs elements and append the pieces in the opposite order, for a cost of O(log k).

Here's the sequence version:

    import Data.Sequence as Seq

    josephus k n = reduce (fromList [1..n])
      where
        reduce xs
          | Seq.null xs = []
          | otherwise   = case viewl (rotate (k-1) xs) of
              x :< xs' -> x : reduce xs'
        rotate i xs = back >< front
          where (front, back) = Seq.splitAt (i `mod` Seq.length xs) xs
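With this version the survivor is not reported separately; as suggested above, it is simply the last element of the execution order. For illustration, on the same classic instance:

    ghci> josephus 3 7
    [3,6,2,7,5,1,4]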

Thanks for your comments everyone! There is one point that has left me puzzled, though.
From: Ross Paterson
You show a bias towards tail recursion. It would be neater (and lazier) to return the executed ones incrementally. This is easier if you don't distinguish the survivor from the rest, i.e. just put it on the end of the list.
Why is tail recursion a bad thing for a finite function? (Of course, it would be... curious for functions that can produce infinite results.) Is tail recursion simply not the most common Haskell idiom, or is there some technical reason I fail to see?

Anthony Chaumas-Pellet

Anthony Chaumas-Pellet wrote:
From: Ross Paterson
You show a bias towards tail recursion. It would be neater (and lazier) to return the executed ones incrementally.
Why is tail recursion a bad thing for a finite function?
Tail recursion tends to create space-leaks, which in turn hurt time performance.

Regards,
Malcolm
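A standard illustration of such a leak (a sketch added here, not Malcolm's code) is a tail-recursive sum with a lazy accumulator:

    -- The accumulator is never forced during the loop, so a chain of n
    -- unevaluated (+) thunks builds up; the function needs O(n) space
    -- where a strict loop would need O(1).
    sumTR :: [Integer] -> Integer
    sumTR = go 0
      where
        go acc []     = acc
        go acc (x:xs) = go (acc + x) xs

Forcing the accumulator at each step (for example with Data.List.foldl', or a seq on acc) restores constant space.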

Malcolm Wallace wrote:
Anthony Chaumas-Pellet wrote:
Why is tail recursion a bad thing for a finite function? Is tail recursion simply not the most common Haskell idiom, or is there some technical reason I fail to see?
Tail recursion tends to create space-leaks, which in turn hurt time performance.
That's only part of the truth. Take for example the ordinary version of

    any' :: (a -> Bool) -> [a] -> Bool
    any' p []     = False
    any' p (x:xs) = p x || any' p xs

and its tail-recursive cousin

    any'' p []     b = b
    any'' p (x:xs) b = any'' p xs (b || p x)

Apparently, the function is True if there is some element in the list that fulfills the condition p. Clearly, we can stop traversing the list when we find the first element that satisfies p. Now, the tail-recursive version is doomed to traverse the entire list, but thanks to lazy evaluation, the ordinary version stops as soon as an element x with p x is found.

Of course, one could alter the definition of any'' to achieve early stopping

    any'' _ _      True  = True
    any'' p (x:xs) False = any'' p xs (p x)
    any'' _ []     False = False

but this is not advisable: we unfolded the definition of (||). In the ordinary case however, (||) already contains all the magic of early stopping; the recursion scheme has nothing to do with it.

Thus, the most common Haskell idiom about tail recursion is to not think about it (and hence not use it). Instead, return values as early as possible (in some cases (||) can return a definite answer by looking at the first argument only). Note that this is very different from strict functional languages.

Regards,
apfelmus
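As a quick illustration of the early-stopping point (an added example, using the definitions above):

    ghci> any' (> 3) [1..]        -- lazy version: stops at the element 4
    True
    ghci> any'' (> 3) [1..] False -- unaltered tail-recursive version:
                                  -- traverses forever, never returns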

Here's a solution that I think is a bit more elegant.

    josephus n k = let loop xs = let d:r = drop (k-1) xs
                                 in d : loop (filter (/= d) r)
                   in take n (loop (cycle [1..n]))

-Paul
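As a quick sanity check (an added illustration; note that the argument order here is n k, and that the survivor now appears last):

    ghci> josephus 7 3
    [3,6,2,7,5,1,4]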

On Sun, 2007-04-01 at 16:46 -0400, Paul Hudak wrote:
Here's a solution that I think is a bit more elegant.
-Paul
    josephus n k = let loop xs = let d:r = drop (k-1) xs
                                 in d : loop (filter (/= d) r)
                   in take n (loop (cycle [1..n]))
Lovely. ...must... resist... urge to fuse...

Actually, the interesting thing that makes this example tricky to fuse using automagic techniques is that while loop looks like it should be a good producer and consumer, it's also recursive. Hmm, interesting.

Duncan

On Mon, Apr 02, 2007 at 09:12:17AM +1000, Duncan Coutts wrote:
On Sun, 2007-04-01 at 16:46 -0400, Paul Hudak wrote:
Here's a solution that I think is a bit more elegant.
-Paul
    josephus n k = let loop xs = let d:r = drop (k-1) xs
                                 in d : loop (filter (/= d) r)
                   in take n (loop (cycle [1..n]))
Lovely.
...must... resist... urge to fuse...
Resist -- there's little to gain from fusing an asymptotically slower algorithm. (This is O(n^2) compared with the original O(nk) and a possible O(n log k).)