
Hi,
The real reason behind my surprise was that I was wondering how more modern languages could make inroads into such an environment. Haskell without recursion and dynamic memory allocation? Hard to imagine.
For some safety-critical applications that require certification, SCADE with the underlying Lustre language is used. It has a formal semantics, and some properties can be checked by automatic verification (up to linear arithmetic). The Lustre dataflow programs are then translated to C (or Ada) by compiling them to automata and representing those in C. This transformation is implemented in OCaml, afaik.

I think this kind of transformation, perhaps specifying an embedded DSL instead of using an external language like Lustre and generating C with static bounds etc., could be a way to use Haskell here; a rough sketch of that idea follows below. But anything using dynamic memory allocation, GC etc. is dangerous for real-time bounds. These programs also often run on old, "outdated" but reliable HW for which mainly C compilers exist. The applicable standards only allow a certain set of programming languages (or subsets of these languages), and you even have to use "proved and tested" compilers and tools.
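To make the "embedded DSL instead of Lustre" idea a bit more concrete, here is a minimal sketch. Everything in it (the Expr and Node types, compileToC, the acc example) is made up for illustration and vastly simpler than SCADE/Lustre; it only shows that a first-order stream language embedded in Haskell can be turned into straight-line C with a fixed-size state struct, no loops and no allocation.

-- A deliberately tiny, hypothetical stream DSL -- not SCADE or Lustre,
-- just an illustration of the "EDSL in, static C out" idea.

data Expr
  = Lit Int
  | Input String    -- an input of the node
  | Pre String      -- previous value of a state variable (roughly Lustre's pre)
  | Add Expr Expr

-- A node: its inputs, plus state variables with initial values and
-- update expressions.
data Node = Node
  { nodeName   :: String
  , nodeInputs :: [String]
  , nodeState  :: [(String, Int, Expr)]
  }

-- Render an expression as a C expression over the state struct 's'
-- and the step function's arguments.
cExpr :: Expr -> String
cExpr (Lit n)   = show n
cExpr (Input x) = x
cExpr (Pre x)   = "s->" ++ x
cExpr (Add a b) = "(" ++ cExpr a ++ " + " ++ cExpr b ++ ")"

-- Emit a state struct, an init function and a step function.  Storage
-- is fixed by the struct and the step body is straight-line code: no
-- loops, no recursion, no malloc.  (A real compiler would copy the old
-- state first so that all Pre reads see the previous cycle; with one
-- state variable this simple version is enough.)
compileToC :: Node -> String
compileToC (Node name ins st) = unlines $
     [ "typedef struct {" ]
  ++ [ "  int " ++ v ++ ";" | (v, _, _) <- st ]
  ++ [ "} " ++ name ++ "_state;"
     , ""
     , "void " ++ name ++ "_init(" ++ name ++ "_state *s) {"
     ]
  ++ [ "  s->" ++ v ++ " = " ++ show i ++ ";" | (v, i, _) <- st ]
  ++ [ "}"
     , ""
     , "void " ++ name ++ "_step(" ++ name ++ "_state *s"
         ++ concatMap (\x -> ", int " ++ x) ins ++ ") {"
     ]
  ++ [ "  s->" ++ v ++ " = " ++ cExpr e ++ ";" | (v, _, e) <- st ]
  ++ [ "}" ]

-- Example: a running sum of an input, i.e. sum = 0 -> pre sum + x.
example :: Node
example = Node "acc" ["x"] [("sum", 0, Add (Pre "sum") (Input "x"))]

main :: IO ()
main = putStr (compileToC example)

Running this prints a state struct plus acc_init/acc_step functions; the point is just that the whole C side is computed from the static structure of the node, so the memory footprint and the work per cycle are fixed at generation time.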
I have a hunch that the real restrictions on this kind of software are not concerned with fixed memory, iterations, whatever, but rather with guaranteed bounds. If that is indeed the case, how feasible would it be to prove the relevant properties for systems programmed in Haskell?
I think implementing the transformation of some input language to static C in Haskell, and proving it correct, is possible. Functional languages like Haskell, or OCaml in the case of SCADE, are well suited to this due to the lack of loops, side effects etc. I think "atom" is such a DSL embedded in Haskell that can generate code for real-time systems. Below is a second sketch of what the reference-semantics side of such a correctness argument could look like.
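It reuses the hypothetical Expr and Node types from the first sketch, and the names are again made up for illustration. The reference semantics of a node is an ordinary Haskell function over finite input traces, and the correctness claim for the generated C is a per-cycle simulation against it:

-- Environments as simple association lists; fine for a sketch.
type Env = [(String, Int)]

look :: String -> Env -> Int
look x env = maybe (error ("unbound variable: " ++ x)) id (lookup x env)

-- Evaluate an expression given the previous cycle's state and the
-- current inputs.
evalExpr :: Env -> Env -> Expr -> Int
evalExpr _    _   (Lit n)   = n
evalExpr _    ins (Input x) = look x ins
evalExpr prev _   (Pre x)   = look x prev
evalExpr prev ins (Add a b) = evalExpr prev ins a + evalExpr prev ins b

-- Reference semantics: the stream of state environments produced by a
-- node for a finite trace of input environments.
-- e.g. runNode example [[("x",1)],[("x",2)],[("x",3)]]
--        == [[("sum",1)],[("sum",3)],[("sum",6)]]
runNode :: Node -> [Env] -> [Env]
runNode (Node _ _ st) = go initial
  where
    initial = [ (v, i) | (v, i, _) <- st ]
    go _    []         = []
    go prev (ins : is) =
      let cur = [ (v, evalExpr prev ins e) | (v, _, e) <- st ]
      in  cur : go cur is

The correctness statement for the generator is then that, starting from acc_init, n calls to acc_step leave the struct in the state runNode predicts for the first n inputs. Since each cycle only depends on the previous state and the current inputs, that is the kind of bounded, first-order property that automatic tools handle reasonably well (or that QuickCheck can at least test cheaply).

regards
Matthias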