Criteria for determining if a recursive function can be implemented in constant memory

Dear Cafe, ever since a little discussion with my tutor I have been wondering how the following problem can be solved, and whether it is decidable: given the definition of a recursive function f in, say, Haskell, determine whether f can be implemented in O(1) memory. At first I thought the solution would be "check whether f is tail-recursive". However, consider the following definition:
-- for the sake of uniformity
if' c t e = if c then t else e

f :: (Num a, Ord a) => a -> a -> a
f x y = if' (x <= 0) y $ if' (x >= 2) (f (x-2) (y+1)) (f (x-1) y)
(I don't really care what this function computes, as long as it terminates, which is obvious.) Although GHC will probably not perform this optimization, f can be realized in O(1) memory:

// trying to duplicate f's structure as closely as possible
double f( double x, double y ) {
START:
    if ( x <= 0 ) return y;
    else {
        if ( x >= 2 ) {
            x = x - 2; y = y + 1;
            goto START;
        } else {
            x = x - 1; y = y;
            goto START;
        }
    }
}

It is crucial that the second if' does not use both of its last two arguments, but only one. If we replace the second if' by, say,
g :: (Num a) => Bool -> a -> a -> a
g c t e = if' c (t + e) (t - e)
then we have to compute *both* (f (x-2) (y+1)) and (f (x-1) y), and x and y have to be kept in memory during the call to (f (x-2) (y+1)); therefore f cannot be implemented in constant memory. (Read: I haven't found a way to do so which does not radically alter f's structure.) So, does someone know how to solve this, or can prove that it can't be solved? Best regards, Steffen
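To make the branching concrete, here is a self-contained version of the modified function (my own reconstruction of the definitions above; I call it f2 to keep it apart from the original f, and I have adjusted the type signatures so it compiles):

```haskell
-- Reconstruction of the g variant discussed above (f2 is my name for it).
if' :: Bool -> a -> a -> a
if' c t e = if c then t else e

g :: (Num a) => Bool -> a -> a -> a
g c t e = if' c (t + e) (t - e)

-- Whichever branch g takes, it uses *both* t and e, so both recursive
-- calls are demanded: the call tree branches, and x and y must be kept
-- alive across the first call, which is why a stack seems unavoidable.
f2 :: (Num a, Ord a) => a -> a -> a
f2 x y = if' (x <= 0) y $ g (x >= 2) (f2 (x - 2) (y + 1)) (f2 (x - 1) y)

main :: IO ()
main = print (f2 4 0 :: Integer)  -- prints 6
```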

Hi Steffen,

Steffen Schuldenzucker wrote:
since a little discussion with my tutor I am wondering how the following problem can be solved and if it is decidable:
Given the definition of a recursive function f in, say, haskell, determine if f can be implemented in O(1) memory.
Constant functions are implementable in O(1) memory, but interpreters for Turing-complete languages are not, so the property of being implementable in O(1) memory is non-trivial and therefore, by Rice's theorem, undecidable. Tillmann

Tillmann Rendel wrote:
Hi Steffen,
Steffen Schuldenzucker wrote:
Given the definition of a recursive function f in, say, haskell, determine if f can be implemented in O(1) memory.
Constant functions are implementable in O(1) memory, but interpreters for Turing-complete languages are not, so the property of being implementable in O(1) memory is non-trivial and therefore, by Rice's theorem, undecidable.
Damn Rice's theorem, spoiling everybody's fun all the time... ;-) Of course, as I understand it, all the theorem says is that no single algorithm can give you a yes/no answer for *every* possible case. So the next question is "is it decidable in any 'interesting' cases?" Then of course you have to go define 'interesting'...

On 7/5/2010 8:33 PM, Andrew Coppin wrote:
Tillmann Rendel wrote:
Hi Steffen,
Steffen Schuldenzucker wrote:
Given the definition of a recursive function f in, say, haskell, determine if f can be implemented in O(1) memory.
Constant functions are implementable in O(1) memory, but interpreters for Turing-complete languages are not, so the property of being implementable in O(1) memory is non-trivial and therefore, by Rice's theorem, undecidable.
Damn Rice's theorem, spoiling everybody's fun all the time... ;-)
Definitely! Thanks, Tillmann, for this quite clear answer.
Of course, as I understand it, all the theorem says is that no single algorithm can give you a yes/no answer for *every* possible case. So the next question is "is it decidable in any 'interesting' cases?"
Then of course you have to go define 'interesting'...
Yes, perhaps I should reformulate my original question to something like: "What is a good algorithm for transforming an algorithm written in a functional language into constant-memory imperative code? Which properties must the functional version satisfy?" (One answer would be tail recursion, but, as I pointed out in my first post, I guess this isn't the whole story.) Or even: "Can you give me an example of a set of functionally-defined algorithms that is maximal in the property that there exists a single algorithm which transforms them all into constant-memory imperative code?" -- Steffen
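To make "tail recursion as one answer" concrete, here is the classic accumulator transformation (a textbook example added for illustration, not code from the thread): the naive version keeps one stack frame per call, while the accumulator version is tail-recursive and corresponds directly to a constant-memory loop, ignoring the size of the numbers themselves.

```haskell
-- Sum of 1..n, written two ways.

-- Not tail-recursive: the addition happens *after* the recursive call
-- returns, so each call's n has to be remembered on a stack.
sumNaive :: Integer -> Integer
sumNaive 0 = 0
sumNaive n = n + sumNaive (n - 1)

-- Tail-recursive: all work is done before the call, so the frame can be
-- reused -- the same goto-loop shape as the C version earlier in the
-- thread.  The seq keeps the accumulator evaluated; without it laziness
-- would build an O(n) chain of thunks instead of running in a loop.
sumAcc :: Integer -> Integer
sumAcc n = go n 0
  where
    go 0 acc = acc
    go m acc = let acc' = acc + m in acc' `seq` go (m - 1) acc'

main :: IO ()
main = print (sumNaive 100, sumAcc 100)  -- prints (5050,5050)
```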

On Jul 6, 2010, at 12:23 AM, Steffen Schuldenzucker wrote:
Given the definition of a recursive function f in, say, haskell, determine if f can be implemented in O(1) memory.
How are you supposed to handle integer arithmetic? If you don't take the size of integers into account, then since a Turing machine can do any computation, it can run a Haskell interpreter, and since a Turing machine's tape can be modelled by a single integer (or more conveniently by two), any Haskell function can be implemented in O(1) Integers. If you do take the size of integers into account, then

pow2 n = loop n 1
  where loop 0 a = a
        loop (m+1) a = loop m (a+a)

requires O(n) memory.
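(A note for readers trying this today: pow2 uses n+k patterns, which were removed in Haskell 2010, so here is an equivalent spelling; the only change is the pattern match. The point is unchanged: the loop is tail-recursive, yet the accumulator a doubles every step, so merely holding the result takes about n bits.)

```haskell
-- pow2 without n+k patterns; behaviour is identical.
pow2 :: Integer -> Integer
pow2 n = loop n 1
  where
    loop 0 a = a
    loop m a = loop (m - 1) (a + a)  -- a grows by one bit per iteration

main :: IO ()
main = print (pow2 10)  -- prints 1024
```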

Forwarding this message to the list.
No, I didn't think about the size of integers. For now, let all numbers
have some bounded size.
-------- Original Message --------
Subject: Re: [Haskell-cafe] Criteria for determining if a recursive
function can be implemented in constant memory
Date: Tue, 6 Jul 2010 13:25:57 +1200
From: Richard O'Keefe
Given the definition of a recursive function f in, say, haskell, determine if f can be implemented in O(1) memory.
How are you supposed to handle integer arithmetic? If you don't take the size of integers into account, then since a Turing machine can do any computation, it can run a Haskell interpreter, and since a Turing machine's tape can be modelled by a single integer (or more conveniently by two), any Haskell function can be implemented in O(1) Integers. If you do take the size of integers into account, then

pow2 n = loop n 1
  where loop 0 a = a
        loop (m+1) a = loop m (a+a)

requires O(n) memory.

If your integers have a bounded size, then your Turing machine is not Turing-complete and can't run a Haskell interpreter. You might be tempted to just make the numbers really big, but bounded, and then say that you can still run most interesting programs while skirting the issue of non-computability. But you will find that all the problems that are now theoretically solvable are still computationally intractable, and you are practically back where you started.

This doesn't mean that you can't make programs that can determine very interesting non-trivial properties of programs/functions, but it does mean that there is no way to make them always work.

- Job

On Tue, Jul 6, 2010 at 9:37 AM, Steffen Schuldenzucker <sschuldenzucker@uni-bonn.de> wrote:
Forwarding this message to the list.
No, I didn't think about the size of integers. For now, let all numbers have some bounded size.
-------- Original Message --------
Subject: Re: [Haskell-cafe] Criteria for determining if a recursive function can be implemented in constant memory
Date: Tue, 6 Jul 2010 13:25:57 +1200
From: Richard O'Keefe
To: Steffen Schuldenzucker

On Jul 6, 2010, at 12:23 AM, Steffen Schuldenzucker wrote:
Given the definition of a recursive function f in, say, haskell, determine if f can be implemented in O(1) memory.
How are you supposed to handle integer arithmetic?
If you don't take the size of integers into account, then since a Turing machine can do any computation, it can run a Haskell interpreter, and since a Turing machine's tape can be modelled by a single integer (or more conveniently by two), any Haskell function can be implemented in O(1) Integers.
If you do take the size of integers into account, then

pow2 n = loop n 1
  where loop 0 a = a
        loop (m+1) a = loop m (a+a)

requires O(n) memory.
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe
participants (5)

- Andrew Coppin
- Job Vranish
- Richard O'Keefe
- Steffen Schuldenzucker
- Tillmann Rendel