
Perhaps it's just my lack of imagination that was driving my original question. I'm just having a hard time imagining how to write reasonably interesting algorithms that way.
Very likely they have very specific functionality and, hopefully, a precise specification of what to do if the memory bounds are reached (at least some "failsafe" mode, etc.). Many of these programs are very simple, but deliberately so.
As I wrote, they might "cheat". It's entirely possible to implement dynamic memory on top of fixed-size arrays and use indexes instead of pointers. Of course, I have no idea if that's what they do.
I think it is very likely done that way. I know this kind of programming from Java smartcards. These support a subset of Java and allow objects to be created only when a program is installed on the card. There is no garbage collection, and objects are persistent, i.e. creating new objects at runtime would fill up the (very small) available memory. "Dynamic" creation is done by reusing one of the statically allocated objects that has been "freed", or returning an error if no free object is available.

Systems in high-security or safety applications, and those with constraints like very little memory or very slow CPUs, often have requirements that make direct usage of languages like Haskell very difficult or impossible. They are programmed in a way that makes their behavior as deterministic as possible, often also in the temporal sense. If you have dynamic data structures with non-O(1) access, it is also not possible to guarantee real-time bounds.

regards
Matthias
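The reuse-or-fail pattern described above can be sketched in C as a pool of objects allocated once, where "creating" one at runtime only marks a free slot as in use. The Session type, the pool size, and the error code are hypothetical; the point is that the worst-case cost is bounded by the constant pool size, so timing stays predictable.

```c
#include <stdbool.h>

/* All objects exist from the start; runtime "creation" reuses a freed
 * slot or fails deterministically.  Names and sizes are illustrative. */
#define MAX_SESSIONS 4
#define ERR_NO_FREE  (-1)

typedef struct { bool in_use; int key; } Session;

static Session sessions[MAX_SESSIONS];  /* statically allocated pool */

/* "Create": reuse a free slot; error code when the pool is exhausted.
 * The scan is bounded by MAX_SESSIONS, so worst-case time is known. */
static int session_open(int key) {
    for (int i = 0; i < MAX_SESSIONS; i++) {
        if (!sessions[i].in_use) {
            sessions[i].in_use = true;
            sessions[i].key = key;
            return i;
        }
    }
    return ERR_NO_FREE;
}

/* "Destroy": mark the slot reusable; nothing is ever deallocated. */
static void session_close(int i) {
    sessions[i].in_use = false;
}
```

Exhaustion is an explicit error value the caller must handle, which matches the "failsafe" requirement: the program's behavior when memory runs out is part of its specification, not an exception thrown by a runtime.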