
The characteristics of the "logical" variable are as follows. An "incomplete" data structure (ie. containing free variables) may be returned as a procedure's output. The free variables can later be filled in by other procedures, giving the effect of implicit assignments to a data structure (cf. Lisp's rplaca, rplacd).
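To make the technique concrete, here is a minimal Prolog sketch of my own (the predicate names are mine, not from the quoted text): a queue is kept as a difference list whose tail is a free variable, and enqueuing an element fills that hole rather than rebuilding the list.

    % A queue as a difference list Front-Hole, where Hole is a free
    % variable marking the end of Front.
    empty_queue(Hole-Hole).

    % Enqueue by binding the old hole to a new cell [Item|NewHole];
    % no part of the existing list is copied or overwritten.
    enqueue(Item, Front-Hole0, Front-Hole) :-
        Hole0 = [Item|Hole].

    % ?- empty_queue(Q0), enqueue(a, Q0, Q1), enqueue(b, Q1, Q2),
    %    Q2 = List-[], writeln(List).
    % [a,b]

The point of the comparison in the quoted passage is that the second enqueue extends a structure another procedure has already returned, much as rplacd would, but via unification rather than assignment.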
There they are *explaining* things to Lisp programmers, not giving the origin of the idea.
If you want to read it that way, it still means that they and their readers were sufficiently aware of the connections that it made sense to explain things this way. I was trying to point out the general context in which Prolog techniques developed, but my impressions are undoubtedly biased by the way I was introduced to Prolog in the late 1980s. If you have an undisputed reference to the original invention of difference lists, where the author(s) explicitly deny any connection to Lisp, I'd be interested.
Also, I thought that Prolog had two origins - one in grammars, the other in logic as a programming language.
See http://en.wikipedia.org/wiki/Definite_clause_grammar. This was specifically the focus of Alain Colmerauer.
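For readers who haven't seen the notation, here is a toy DCG fragment (my own illustration, not taken from the Wikipedia page). The --> rules are expanded by the Prolog system into ordinary clauses that thread an input list and a remainder list through each nonterminal, i.e. the same difference-list style discussed above.

    % A toy grammar in DCG notation.
    greeting --> [hello], name.
    name     --> [world].
    name     --> [prolog].

    % Each rule is expanded (roughly) into a clause with two extra
    % list arguments, e.g.
    %   greeting(S0, S) :- S0 = [hello|S1], name(S1, S).
    %
    % ?- phrase(greeting, [hello, world]).
    % true.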
You may be thinking of Cordell Green's 'The use of theorem-proving techniques in question-answering systems'.
No, I haven't read that yet (I've found the later 'Application of Theorem Proving to Problem Solving' online, but not this one). I was thinking of the later theme of 'Predicate Logic as a Programming Language', 'Algorithm = Logic + Control', etc. (here exemplified by titles of Kowalski's papers), but Kowalski points to Green's paper as 'perhaps the first zenith of logic in AI' in his 'The Early Years of Logic Programming' (Kowalski's papers are online at http://www.doc.ic.ac.uk/~rak/), so perhaps that was the start of this theme.

Claus