
Greg Woodhouse wrote:
> Perhaps the issue is that the manipulations below are purely
> syntactic, and though they work, it is not at all clear how to make
> contact with other notions of computability.

But all the computation rules of the lambda calculus are "syntactic" in
that sense. When you can prove things by symbol pushing it's the
easiest way. But as Paul Hudak mentioned, there are definitions that
are equal, but you need something stronger to prove it, like fixpoint
induction.

> Perhaps it is that
>   Y = (\f. (\x. f (x x)) (\x. f (x x)))
> is a perfectly sensible definition, it's still just a recipe for a
> computation. Maybe I'm thinking too hard, but it reminds me of the mu
> operator. Primitive recursive functions are nice and constructive, but
> minimization is basically a "search"; there's no guarantee it will
> work. If I write
>   g(x) = mu x (f(x) - x)
> then I've basically said "look at every x and stop when you find a
> fixed point". Likewise, given a candidate for f, it's easy to verify
> that Y f = f (Y f), as you've shown, but can the f be found without
> some kind of "search"?

There is no search with this definition (as you can see); it's very
constructive. It computes the fixpoint, which you can also define as

             oo
    fix f = lub f^i(_|_)
            i=0

where f^i is f iterated i times. Is that a definition of fixpoint that
makes you happier?
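For concreteness, here is a small Haskell sketch (GHC; the names Mu, y,
factF, and approx are just illustrative). The untyped Y above needs a
recursive newtype to get past Haskell's type checker, and the
approximations f^i(_|_) can be built by iterating f on undefined:

    import Data.Function (fix)

    -- The usual typed encoding of Y = (\f. (\x. f (x x)) (\x. f (x x))):
    -- a recursive newtype stands in for the self-application x x.
    newtype Mu a = Mu { unMu :: Mu a -> a }

    y :: (a -> a) -> a
    y f = (\x -> f (unMu x x)) (Mu (\x -> f (unMu x x)))

    -- A functional whose least fixed point is factorial.
    factF :: (Integer -> Integer) -> Integer -> Integer
    factF rec n = if n == 0 then 1 else n * rec (n - 1)

    -- The i-th approximation f^i(_|_), with undefined playing _|_.
    approx :: Int -> (a -> a) -> a
    approx i f = iterate f undefined !! i

    main :: IO ()
    main = do
      print (y factF 5)         -- 120: no search, just unfolding
      print (fix factF 5)       -- 120: the library's least fixed point
      print (approx 6 factF 5)  -- 120: f^6(_|_) is already defined at 5
      -- approx 6 factF 10 would hit _|_: only a finite approximation

Each approx i factF agrees with factorial on 0..i-1 and is _|_ above
that, which is exactly the chain whose lub is factorial.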
> Since there are recursive functions that aren't primitive recursive,
> the answer has to be "no".

Where did primitive recursion come from? Y is used to express general
recursion.
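As a sketch of that point: Ackermann's function is total but not
primitive recursive, yet it falls straight out of general recursion
(ack is an illustrative name; fix is from Data.Function):

    import Data.Function (fix)

    -- Total, but provably not primitive recursive; general recursion
    -- via fix needs no extra machinery for it.
    ack :: Integer -> Integer -> Integer
    ack = fix (\rec m n ->
            if m == 0 then n + 1
            else if n == 0 then rec (m - 1) 1
            else rec (m - 1) (rec m (n - 1)))

    main :: IO ()
    main = print (ack 2 3, ack 3 3)  -- (9,61)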
> Finally, you've exhibited a sequence of fixed points, and in this case
> it's intuitively clear how these correspond to something we might call
> an infinite list. But is there an interpretation that makes this
> precise? The notation
>   ones = cons 1 ones
>   ones = cons 1 (cons 1 ones)
>   ...
The list ones is the least upper bound, in the domain of lazy (integer)
lists, of the following chain:

    _|_,  cons 1 _|_,  cons 1 (cons 1 _|_),  ...

How much domain theory have you studied? Maybe that's where you can
find the connection you're missing?

  -- Lennart
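P.S. To make that chain concrete, a small Haskell sketch (the helper
chain is just an illustrative name):

    import Data.Function (fix)

    ones :: [Integer]
    ones = 1 : ones            -- ones = cons 1 ones

    ones' :: [Integer]
    ones' = fix (1 :)          -- the same list, as the fixed point of (1 :)

    -- The i-th element of the chain _|_, 1:_|_, 1:1:_|_, ...
    chain :: Int -> [Integer]
    chain i = iterate (1 :) undefined !! i

    main :: IO ()
    main = do
      print (take 5 ones)      -- [1,1,1,1,1]
      print (take 5 ones')     -- [1,1,1,1,1]
      print (take 4 (chain 4)) -- [1,1,1,1]: each approximation agrees
                               -- with ones on a longer finite prefix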