If your integers have a bounded size, then the "Turing machine" you encode in a constant number of them is no longer Turing complete and can't run a Haskell interpreter.
You might be tempted to just make the numbers really big, but still bounded, and say that you can run most interesting programs while skirting the issue of non-computability. With every value bounded, a program that keeps only a constant number of them has finitely many reachable states, so questions like halting become decidable in principle by exhaustive simulation. But you will find that all the problems that are now theoretically solvable are still computationally intractable, and you are practically back where you started.
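To put a rough number on "intractable": with k live variables of b bits each there are at most (2^b)^k distinct states, so "simulate until a state repeats" blows up immediately. A back-of-the-envelope sketch (the function and the numbers are mine, purely for illustration):

    -- Upper bound on the number of distinct states of a program that
    -- keeps k variables of b bits each.  Deciding halting "in principle"
    -- means simulating for at most this many steps before a state repeats.
    statesUpperBound :: Integer -> Integer -> Integer
    statesUpperBound b k = (2 ^ b) ^ k

    -- For example, statesUpperBound 64 4 == 2 ^ 256, already far beyond
    -- any feasible number of simulation steps, for just four 64-bit words.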
This doesn't mean you can't write programs that determine interesting, non-trivial properties of other programs or functions; it does mean that there is no way to make them give a correct answer on every input.
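One workable compromise is a deliberately conservative checker: answer in the easy cases and admit defeat otherwise. A toy sketch in Haskell, with a made-up classification of single-argument recursive definitions (none of these names come from the thread):

    -- Hypothetical, heavily simplified shapes a recursive definition can take.
    data Shape
      = NoRecursion        -- f n = <some constant expression>
      | TailRecursive      -- f n = f (n - 1), accumulator style
      | StackBuilding      -- f n = g (f (n - 1)); naively, one frame per call
      | TooComplicated     -- anything this toy classifier doesn't understand

    -- Sound but incomplete: Just is a definite verdict for a naive
    -- evaluator, Nothing is an honest "don't know".
    constantMemory :: Shape -> Maybe Bool
    constantMemory NoRecursion    = Just True
    constantMemory TailRecursive  = Just True
    constantMemory StackBuilding  = Just False
    constantMemory TooComplicated = Nothing

The point is the last case: some definitions must always land in the "don't know" bucket, and no amount of cleverness removes it entirely.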
- Job
Forwarding this message to the list.
No, I didn't think about the size of integers. For now, let all numbers have some bounded size.
-------- Original Message --------
Subject: Re: [Haskell-cafe] Criteria for determining if a recursive function can be implemented in constant memory
Date: Tue, 6 Jul 2010 13:25:57 +1200
From: Richard O'Keefe <ok@cs.otago.ac.nz>
To: Steffen Schuldenzucker <sschuldenzucker@uni-bonn.de>
On Jul 6, 2010, at 12:23 AM, Steffen Schuldenzucker wrote:

> Given the definition of a recursive function f in, say, haskell,
> determine if f can be implemented in O(1) memory.

How are you supposed to handle integer arithmetic?

If you don't take the size of integers into account, then since a Turing machine can do any computation, it can run a Haskell interpreter, and since a Turing machine's tape can be modelled by a single integer (or more conveniently by two), any Haskell function can be implemented in O(1) Integers.

If you do take the size of integers into account, then

    pow2 n = loop n 1
      where loop 0     a = a
            loop (m+1) a = loop m (a+a)

requires O(n) memory.
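A side note on the quoted code: the (m+1) pattern is old Haskell-98 style and no longer standard. A version in current Haskell that makes the same point, plus a quick way to see the O(n) growth, might look like this (the helper bitsNeeded is my name, not from the mail):

    -- Same doubling loop without the n+k pattern: a constant number of
    -- variables, but the accumulated value 2^n itself occupies about n bits.
    pow2 :: Integer -> Integer
    pow2 n = loop n 1
      where
        loop 0 a = a
        loop m a = loop (m - 1) (a + a)

    -- How many bits it takes to write x down.
    bitsNeeded :: Integer -> Int
    bitsNeeded x = length (takeWhile (> 0) (iterate (`div` 2) x))

    -- bitsNeeded (pow2 100) == 101, so the "O(1) variables" loop still
    -- needs O(n) memory once the size of the integers is counted.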