
Jacques Carette
On 22/11/2012 11:52 AM, Brandon Allbery wrote:
On Thu, Nov 22, 2012 at 7:56 AM, Jacques Carette <carette@mcmaster.ca> wrote:
On 20/11/2012 6:08 PM, Richard O'Keefe wrote:
On 21/11/2012, at 4:49 AM, <citb@lavabit.com> wrote:
Well, I don't know. Would it save some time? Why bother with a core language?
For a high level language (and for this purpose, even Fortran 66 counts as "high level") you really don't _want_ a direct translation from source code to object code. You want to eliminate unused code and you want to do all sorts of analyses and improvements. It is *much* easier to do all that to a small core language than to the full source language.
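To make the "small core" idea concrete, here is a minimal Haskell sketch of desugaring a surface language into a tiny core. The types and constructor names are invented for illustration and are not GHC's actual Core:

-- Surface language: several convenience constructs.
data Surface
  = SVar String
  | SLam String Surface
  | SApp Surface Surface
  | SLet String Surface Surface
  | SIf  Surface Surface Surface
  | SList [Surface]

-- Core language: just lambda calculus plus constructors and case.
data Core
  = CVar String
  | CLam String Core
  | CApp Core Core
  | CCon String [Core]
  | CCase Core [(String, [String], Core)]

-- Desugaring: 'let', 'if' and list literals disappear, so later passes
-- (dead-code elimination, analyses, improvements) only ever see five forms.
desugar :: Surface -> Core
desugar (SVar x)     = CVar x
desugar (SLam x b)   = CLam x (desugar b)
desugar (SApp f a)   = CApp (desugar f) (desugar a)
desugar (SLet x e b) = CApp (CLam x (desugar b)) (desugar e)
desugar (SIf c t e)  = CCase (desugar c)
                         [ ("True",  [], desugar t)
                         , ("False", [], desugar e) ]
desugar (SList es)   = foldr (\h t -> CCon "Cons" [h, t]) (CCon "Nil" []) (map desugar es)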
Actually, here I disagree. It might be much 'easier' for the programmers to do it for a small core language, but it may turn out to be much, much less effective. I 'discovered' this when (co-)writing a partial evaluator for Maple:
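For readers who have not met partial evaluation before, here is a minimal, generic sketch of the technique in Haskell. It is not the Maple partial evaluator under discussion; the tiny expression type and names are invented for illustration:

-- Residual expressions over the dynamic (unknown at specialization time) input.
data Expr
  = Lit Int
  | Var String
  | Mul Expr Expr
  deriving Show

-- power n x == x^n as an expression in the dynamic variable x.
-- The exponent n is static, so the recursion unrolls at specialization time.
power :: Int -> Expr -> Expr
power 0 _ = Lit 1
power n x = simplify (Mul x (power (n - 1) x))

-- The "static" part of the computation: constant folding and a unit law.
simplify :: Expr -> Expr
simplify (Mul (Lit 1) e)       = e
simplify (Mul e (Lit 1))       = e
simplify (Mul (Lit a) (Lit b)) = Lit (a * b)
simplify e                     = e

-- ghci> power 3 (Var "x")
-- Mul (Var "x") (Mul (Var "x") (Var "x"))

The disagreement that follows is not about using such a technique, but about how small the representation it operates over should be.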
You're still using a core language, though; just with a slightly different focus than Haskell's. I already mentioned gcc's internal language, which similarly is larger (semantically; syntactically it's sexprs). What combination is more appropriate depends on the language and the compiler implementation.
Right, we agree: it is not 'core language' I disagreed with, it is 'smaller core language'. And we also agree that smaller/larger depends on the eventual application. But 'smaller core language' is so ingrained as "conventional wisdom" that I felt compelled to offer evidence against this bit of folklore.

Brandon Allbery replied:
I don't think your evidence contradicts that bit of folklore. But as stated it's vague. In particular, is "smaller" relative to the full source language, or is it absolute (in which case you should compile to a RISC architecture and optimize that :-)? Since the latter seems silly, I have to ask: was your core language for Maple larger than Maple?
--
Sent from my Android tablet with K-9 Mail. Please excuse my swyping.