Re: arrays and lambda terms

Cagdas Ozgenc wrote:
Is it possible to encode an array using lambda terms only, and recover the term specified by an index term in O(1) time?
I'd like to go out on a limb and argue that there is generally no such thing as an O(1) array access operation. On conventional hardware, the fastest access to the i-th element of an array 'ar' is expressed by the following formula, which underlies C "arrays":

    ar[i] = *(ar + i)

The formula involves the addition of two numbers -- which is, in general, an O(log(n)) operation in the magnitude n of the addends. Granted, if we operate in the restricted domain of mod 2^32 integers, addition can be considered O(1). If we stay in this domain, however, the size of all our arrays is limited to some M (2^32, often less). If we instead implement our arrays as linked lists, the cost of accessing one element will not exceed M*2*cost(dereference), which is a constant, which is O(1). Thus, pedantically speaking, within the restricted integer domain, any terminating array access method is O(1).

The choice of "hardware" may greatly affect the apparent complexity. For example, associative memory access is difficult on conventional architectures and fast for holographic memory.
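For concreteness, here is a small Haskell sketch of the linked-list alternative (the List type and the nil/cons/index names below are purely illustrative, not from the original post): the lambda-term (Scott) encoding of a list, wrapped in a newtype, where reading index i walks i links -- bounded by a constant once indices are confined to a fixed domain of size M.

{-# LANGUAGE RankNTypes #-}

-- An "array" as a Scott-encoded list: the whole data structure is a
-- function that asks its caller what to do with an empty or non-empty
-- spine.  No pointer arithmetic is involved anywhere.
newtype List a = List { unList :: forall r. r -> (a -> List a -> r) -> r }

nil :: List a
nil = List (\n _ -> n)

cons :: a -> List a -> List a
cons x xs = List (\_ c -> c x xs)

-- Reading index i costs i unfoldings of the spine; with indices bounded
-- by M, that cost is bounded by a constant.
index :: List a -> Int -> Maybe a
index xs i = unList xs Nothing (\x rest ->
               if i == 0 then Just x else index rest (i - 1))

-- e.g.  index (cons 'a' (cons 'b' (cons 'c' nil))) 2  ==  Just 'c'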
Or is it possible in any rewriting system?
If all your arrays are limited in size to 2^32, and if your rewriting system offers projection operators pi_0, pi_1, ... pi_4294967295 (i.e., the right "hardware"), well, then it is possible.
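In the same spirit, here is a toy Haskell rendering of the projection-operator idea, for a size bound of 4 rather than 2^32 (mkArr4 and pi0..pi3 are made-up names for illustration): a fixed-size "array" is a single term that hands all of its elements to a continuation, and each projection is a fixed selector, so access involves no index arithmetic at all.

{-# LANGUAGE RankNTypes #-}

-- A width-4 "array" as a continuation-passing tuple.
newtype Arr4 a = Arr4 { runArr4 :: forall r. (a -> a -> a -> a -> r) -> r }

mkArr4 :: a -> a -> a -> a -> Arr4 a
mkArr4 x0 x1 x2 x3 = Arr4 (\k -> k x0 x1 x2 x3)

-- One projection per slot: each is a fixed-size selector term.
pi0, pi1, pi2, pi3 :: Arr4 a -> a
pi0 a = runArr4 a (\x0 _  _  _  -> x0)
pi1 a = runArr4 a (\_  x1 _  _  -> x1)
pi2 a = runArr4 a (\_  _  x2 _  -> x2)
pi3 a = runArr4 a (\_  _  _  x3 -> x3)

-- e.g.  pi2 (mkArr4 'a' 'b' 'c' 'd')  ==  'c'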

The formula involves the addition of two numbers -- which is, in general, an O(log(n)) operation in the magnitude n of the addends. Granted, if we operate in the restricted domain of mod 2^32 integers, addition can be considered O(1). If we stay in this domain, however, the size of all our arrays is limited to some M (2^32, often less). If we instead implement our arrays as linked lists, the cost of accessing one element will not exceed M*2*cost(dereference), which is a constant, which is O(1). Thus, pedantically speaking, within the restricted integer domain, any terminating array access method is O(1).
Yes, but then it will have unstable complexity, whereas "array access" is usually taken to have equal worst-, best-, and average-case complexity (i.e., one can write a program for a hard real-time system). With a linked-list implementation you'll never have that.
The choice of "hardware" may greatly affect the apparent complexity. For example, associative memory access is difficult on conventional architectures and fast for holographic memory.
Very true.
Or is it possible in any rewriting system?
If all your arrays are limited in size to 2^32, and if your rewriting system offers projection operators pi_0, pi_1, ... pi_4294967295 (i.e., the right "hardware"), well, then it is possible.
The projection operators should all take an equal number of reduction steps. So does that mean it is not possible to do this in lambda calculus if the hardware only supports, for example, alpha and beta reduction (and without cheating behind the scenes for the projection functions)?
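One way to make the question concrete is to count reduction steps mechanically. The rough Haskell sketch below (entirely illustrative: a naive normal-order reducer, not anything from the thread) counts the beta steps needed to project each component out of a Church-encoded n-tuple. The count is the same for every index i, but it grows with the tuple width n, so the projections are "equal cost" without being constant in the size bound.

-- Lambda terms with named variables.
data Term = Var String | Lam String Term | App Term Term
  deriving Show

-- Naive substitution; adequate here because every binder in the terms
-- built below has a distinct name, so no capture can occur.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- One leftmost-outermost (normal-order) beta step, if a redex exists.
step :: Term -> Maybe Term
step (App (Lam x b) a) = Just (subst x a b)
step (App f a)         = case step f of
                           Just f' -> Just (App f' a)
                           Nothing -> App f <$> step a
step (Lam x b)         = Lam x <$> step b
step (Var _)           = Nothing

-- Number of beta steps to reach normal form.
steps :: Term -> Int
steps t = maybe 0 (\t' -> 1 + steps t') (step t)

-- Church n-tuple <e0,...,e(n-1)> = \s. s e0 ... e(n-1)
tuple :: Int -> Term
tuple n = Lam "s" (foldl App (Var "s") [Var ("e" ++ show k) | k <- [0 .. n-1]])

-- i-th projection: proj n i = \t. t (\x0 ... x(n-1). xi)
proj :: Int -> Int -> Term
proj n i = Lam "t" (App (Var "t") sel)
  where sel = foldr Lam (Var ("x" ++ show i)) ["x" ++ show k | k <- [0 .. n-1]]

-- Each projection of a width-4 tuple normalises in n + 2 = 6 steps,
-- independent of i -- but the count does grow with n.
main :: IO ()
main = print [steps (App (proj 4 i) (tuple 4)) | i <- [0 .. 3]]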