
> Of these three factors, I see the first one as benign and indeed what I see as ideal semantics. The imposed delay is the theoretical minimum (infinitesimal) while still allowing meaningful specification of self-reactive and mutually-reactive systems.

Yes, that's definitely true. But you also need the appropriate combinators to be able to exploit this potential.
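Just to illustrate what I mean by exploiting it, here's a toy list-based signal model (purely a sketch, nothing to do with Reactive's actual types) where a single delay combinator plays the role of the infinitesimal delay and is exactly what makes a mutually-recursive definition productive:

-- A toy, list-based signal model: a signal is just its sequence of
-- sampled values, and 'delay' shifts it by one step, standing in for
-- the infinitesimal delay discussed above.
newtype Signal a = Signal { runSignal :: [a] }

delay :: a -> Signal a -> Signal a
delay x0 (Signal xs) = Signal (x0 : xs)

zipS :: (a -> b -> c) -> Signal a -> Signal b -> Signal c
zipS f (Signal xs) (Signal ys) = Signal (zipWith f xs ys)

-- Two mutually-reactive signals: each one reads the other's previous
-- value.  The 'delay' combinator is what keeps the recursion productive.
ping, pong :: Signal Int
ping = zipS (+) (delay 0 pong) (Signal (repeat 1))
pong = zipS (*) (delay 1 ping) (Signal (repeat 2))

main :: IO ()
main = print (take 6 (runSignal ping), take 6 (runSignal pong))

The point is that the programmer only has to say where the delay goes; the semantics take care of the rest.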
> For 3, I know of no benefit to having integration sample times correspond to display times, or even to match the frequency.

The fact that the test uses integral is really beside the point. We can talk about any arbitrary behaviour and its changes.
The heart of the problem is that in the current Reactive approach there is no way to derive an event solely from behaviours, since we inevitably need sampling events to get at the values. In order to avoid unnecessary delays in the presence of mandatory infinitesimal delays, we'd have to make sure that if A depends on B, then A is switched only after some non-zero time has elapsed since B's switch; otherwise we observe the propagation of delays seen in the example. Generating appropriate sampling events all around the system that take the dynamically changing dependencies into account is a big enough problem in itself (in fact, it is a huge burden on the programmer!), but there is obviously no way to do so in the presence of a dependency cycle. And again we arrive at the need to break cycles somehow, and you even have to be explicit about where the break goes if you want a deterministic system.

Because of all this, I believe that Fran's predicate events make much more sense, at least as a semantic basis. You are among the most knowledgeable about the issues surrounding them, and I'm sure you had plenty of good reasons to give up on that approach, but the Reactive way doesn't seem to be the right answer either.
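To make the contrast concrete, here's a self-contained sketch in the usual function-of-time semantic model (not either library's real types): deriving an event through an explicit sampling clock gives an answer that depends on the clock, while a Fran-style predicate event, approximated here by naive bisection rather than anything Fran actually did, is determined by the behaviour alone:

-- Behaviours as plain functions of time.
type Time        = Double
type Behaviour a = Time -> a

-- Deriving an event from a behaviour via an explicit sampling clock:
-- the occurrence time is whichever tick first sees the condition hold,
-- so the answer depends on the clock, not only on the behaviour.
sampledWhen :: [Time] -> Behaviour Bool -> Maybe Time
sampledWhen clock b = case filter b clock of
                        (t:_) -> Just t
                        []    -> Nothing

-- A predicate-style event instead asks for the exact crossing,
-- approximated here by bisection over an interval where the condition
-- flips (a sketch of the idea only).
predicateIn :: (Time, Time) -> Behaviour Bool -> Time
predicateIn (lo, hi) b
  | hi - lo < 1e-9 = hi
  | b mid          = predicateIn (lo, mid) b
  | otherwise      = predicateIn (mid, hi) b
  where mid = (lo + hi) / 2

main :: IO ()
main = do
  let height t  = 10 - 4.9 * t * t        -- a falling object
      hitGround = (<= 0) . height         -- Behaviour Bool
  print (sampledWhen [0, 0.5 ..] hitGround)  -- clock-dependent: Just 1.5
  print (predicateIn (0, 10) hitGround)      -- behaviour-determined: ~1.4286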
> to use a variable-step-size method that adapts to both the nature of the behavior being integrated and to the available compute cycles.

I'm afraid there's no way that would fit every application, so any kind of advanced sampling can only be an option. After all, even a single application can have wildly different stability properties once you start tweaking some parameters, and you have to be able to tell desired behaviour apart from unwanted artifacts. If you look at a real-time physics engine, you'll see trade-offs everywhere. For instance, allowing some interpenetration between supposedly solid objects improves stability when there are lots of collisions going on, without resorting to extreme supersampling. It is up to you how much inconsistency (in both time and space) you can live with for the sake of real-time reactivity.
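For what it's worth, here's roughly what such a variable-step method boils down to, as a sketch (a step-doubling adaptive Euler, a hypothetical helper rather than anything in Reactive); the tolerance is precisely the kind of knob whose right setting depends on the application:

-- A step-doubling adaptive Euler integrator.  The tolerance trades
-- accuracy against the number of samples taken, and no single setting
-- suits every application.
adaptiveEuler :: Double                        -- error tolerance per step
              -> (Double -> Double -> Double)  -- derivative f t y
              -> Double -> Double              -- initial time and value
              -> Double                        -- end time
              -> [(Double, Double)]            -- accepted (t, y) samples
adaptiveEuler tol f t0 y0 tEnd = go t0 y0 0.1
  where
    go t y h
      | t >= tEnd = [(t, y)]
      | otherwise =
          let h'    = min h (tEnd - t)
              full  = y + h' * f t y                          -- one full step
              half  = y + (h' / 2) * f t y
              small = half + (h' / 2) * f (t + h' / 2) half   -- two half steps
              err   = abs (full - small)
          in if err > tol
               then go t y (h' / 2)                      -- too coarse: shrink
               else (t, y) : go (t + h') small (h' * 2)  -- accept, try growing

main :: IO ()
main = do
  let samples = adaptiveEuler 1e-4 (\_ y -> y) 0 1 1   -- dy/dt = y, y(0) = 1
  print (length samples)      -- how many samples this tolerance cost
  print (snd (last samples))  -- should land near e ~ 2.718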
In short, I don't think an adaptive time step is the real solution here. You have to be able to express trade-offs in general to get a practical system, and playing around with the sampling rate is just one special case. Committing to it means not letting the programmer decide whether they prefer smooth or accurate animation, or, more precisely, where on that scale their preference lies.
> As for 2, I've used much better integration methods in the past.

As I said, this is not really about integration, which only served as an example. It's about reactivity in general.
Gergely

--
http://www.fastmail.fm - The way an email service should be