
* Donn Cave:
Quoth Florian Weimer, quoting Wikipedia: "Managed code is a differentiation coined by Microsoft to identify computer program code that requires and will only execute under the "management" of a Common Language Runtime virtual machine (resulting in Bytecode)."
I like this term, and I apply it by extension to any system which enforces memory safety, as long as you stick to non-internals such as array indexing (even Java has got, in practical terms, fairly portable PEEK/POKE operations).
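As a concrete illustration of the PEEK/POKE point, here is a minimal Haskell sketch (purely illustrative, not something the thread or the CLR provides): even in GHC's memory-safe world, the FFI layer exposes raw loads and stores via Foreign.Storable.

    import Foreign.Marshal.Alloc (mallocBytes, free)
    import Foreign.Storable (peek, poke)

    main :: IO ()
    main = do
      p <- mallocBytes 8   -- raw allocation of 8 bytes, no type or lifetime tracking
      poke p (42 :: Int)   -- POKE: raw store, no bounds check
      x <- peek p          -- PEEK: raw load from the same address
      print x
      free p               -- manual deallocation; nothing prevents later reuse of p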
... enforces memory safety? Can't that be done in code compiled straight to CPU instructions without even calling a run-time, let alone being managed by one?
It is quite possible to have memory-unsafe interpreted languages. I wouldn't necessarily call them "managed". For example, a simple interpreted Forth might have explicit words for heap allocation and deallocation. This isn't an interpreted/compiled distinction (and the line between the two is increasingly blurred because there are high-performance compilers these days for languages which are traditionally considered interpreted). You need restrictions on memory deallocation (which must follow a stack or region discipline), and thus indirectly on allocation. I don't think region inference has caught on, which suggests that there are inherent limitations when trying to apply it to application code.
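The stack-versus-manual deallocation distinction can be shown with a small Haskell sketch (my own illustration; the names stackStyle and manualStyle are made up):

    import Foreign.Marshal.Alloc (alloca, mallocBytes, free)
    import Foreign.Ptr (Ptr)
    import Foreign.Storable (peek, poke)

    -- Stack discipline: the allocation lives exactly as long as the callback,
    -- so the deallocation point is fixed by program structure.
    stackStyle :: IO Int
    stackStyle = alloca $ \p -> do
      poke p (1 :: Int)
      peek p

    -- Manual discipline: deallocation is a separate, unchecked step, so a
    -- use-after-free is only a misplaced line away.
    manualStyle :: IO Int
    manualStyle = do
      p <- mallocBytes 8 :: IO (Ptr Int)
      poke p 1
      x <- peek p
      free p
      return x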
I'm not sure what you mean, but in this case, the term has surely been extended too far - it doesn't seem to be managed code in the sense quoted above, nor does it seem to fit with common non-technical usage of "managed".
Not really---many C programmers associate memory safety with micromanagement by the language environment.
In other words, a new way to say `interpreted'.
This term doesn't really apply that well to the CLR because its bytecode really can't be interpreted efficiently. (The bytecodes lack type information.)
To me, the CLR takes bytecode, maps it in some way to CPU machine code and executes the latter. I would call that "interpreted", but you wouldn't? Or am I wrong about what's happening? I know Wikipedia isn't necessarily the ultimate authority on computer science, but it says "Many interpreted languages are first compiled to some form of virtual machine code, which is then either interpreted or compiled at runtime to native code." (... going on to cite Java and .NET Framework among others), so I'm not the only one who finds it expedient to use that word "interpreted" in this context.
I think the word is increasingly meaningless. Current x86 CPUs (excluding the low-power variants) transform the instruction stream before execution, too. Your VM hypervisor might emulate the effect of certain instruction sequences which cannot be virtualized solely in hardware. C language environments contain run-time code generation to tune for the specific CPU on which they are running. And historically, even C was compiled to bytecode, so that large programs could fit into available RAM. And so on.
At this point, I still don't know for sure if you think a GHC-compiled Haskell program is managed, or unmanaged, but I think managed?
Yup, it's managed (at least according to my own use of the word). Haskell tries quite hard to protect its own abstractions (which is essentially another view on memory safety).
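One small illustration of what "protecting its own abstractions" means here (a made-up example, not from the thread): safe array indexing is bounds-checked at run time, so a bad index fails instead of reading arbitrary memory.

    import Data.Array (listArray, (!))

    main :: IO ()
    main = do
      let a = listArray (0, 2) [10, 20, 30 :: Int]
      print (a ! 1)   -- prints 20
      print (a ! 5)   -- raises an array-bounds error rather than peeking past the buffer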
And in my confused present state I would say it's unmanaged.
Johan Tibell speaks of the "GHC I/O manager", so I think I'm in good company.