A question about causality in FRP

Hi everyone,

Not sure this is the right venue, as this is a question about the semantics of FRP rather than Haskell... I have a question about the definition of "causality" for stream functions. A quick recap...

Given a totally ordered set T (of times), the type of behaviours Beh A is defined to be (T -> A). Behaviours come with an equivalence relation a =t b (equal up to time t) defined as:

  a =t b  iff  forall s <= t . a(s) = b(s)

A function (f : Beh A -> Beh B) is causal whenever it respects =t, i.e. (forall t . a =t b => f a =t f b).

This is fine and good, until we hit nested behaviours, where there are oddities. Consider the function:

  double : Beh A -> Beh (Beh A)
  double a t u = a u

This fails to be causal, because (using list notation for behaviours where the time domain is the natural numbers) the two inputs below agree up to time 0, but their images under double already differ at time 0:

  [0,1,2,...] 0 = 0 = [0,2,4,...] 0

but:

  double [0,1,2,...] 0 = [0,1,2,...] != [0,2,4,...] = double [0,2,4,...] 0

On the other hand:

  weird : Beh (Beh A) -> Beh A
  weird a t = a t (t + 1)

is causal, even though on the surface the behaviour at time t depends on the behaviour at time t+1.

An alternative definition, which allows double and disallows weird, is a "deep" equality:

  a ~t b  iff  forall s <= t . a(s) ~s b(s)

Note that this depends on deep equality being defined at type A. Some appropriate definitions for ~t at other types include:

  at type A -> B:  f ~t g  iff  forall s <= t . a ~s b implies f a ~s g b
  at type (A,B):   (a1,a2) ~t (b1,b2)  iff  (a1 ~t b1) and (a2 ~t b2)

[And yes, I don't think it's a coincidence that this looks a lot like a parametricity definition or a step-indexed logical relation.]

Has such a definition of "deep" causality been investigated?

Alan.
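For concreteness, here is a small executable Haskell rendering of the above, taking the natural numbers as the time domain (the Time synonym, eqUpTo and the finite-prefix spot-check are just illustrative devices, since full equality of behaviours can't be tested directly):

  type Time = Int
  type Beh a = Time -> a

  -- a =t b, spot-checked on the finite prefix [0..t]
  eqUpTo :: Eq a => Time -> Beh a -> Beh a -> Bool
  eqUpTo t a b = all (\s -> a s == b s) [0 .. t]

  double :: Beh a -> Beh (Beh a)
  double a _t u = a u

  weird :: Beh (Beh a) -> Beh a
  weird a t = a t (t + 1)

  -- The counterexample: id and (* 2) agree up to time 0 ...
  --   eqUpTo 0 id (* 2)                        == True
  -- ... but sampling their images under double at time 0 already exposes
  -- the whole future of each input:
  --   eqUpTo 5 (double id 0) (double (* 2) 0)  == False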

On Thu, Oct 13, 2011 at 7:54 AM, Alan Jeffrey wrote:
A function (f : Beh A -> Beh B) is causal whenever it respects =t, i.e. (forall t . a =t b => f a =t f b).
Yes. Function outputs only depend on the past values of the input function.

Your conclusions about double and weird are accurate. double lifts the future at each instant into the present, which is obviously not causal. And the `weird` function presumes you have already obtained a complete view of a behavior at each instant.

The `problem` such as it exists: you will be unable to causally construct the argument to the `weird` function, except by modeling a nested/simulated world (i.e. modeling one FRP system within another). This is not an unrealistic endeavor, e.g. one might model the future position of a thrown baseball in order to predict it. In this sense, `weird` is not weird.

If you want first-class behaviors or behavior transformers, those will need a different abstraction than 'nested' behaviors. Nested != First Class. You'd have special functions to lift a first-class behavior as an argument (e.g. add a phantom type to prohibit non-causal observation), and to lower it into the main system for sampling (e.g. ArrowApply).

`Deep` causality is not so much an issue, since it often makes perfect sense to simulate one FRP behavior within another.

Regards, Dave

David Barbour wrote:
If you want first-class behaviors or behavior transformers, those will need a different abstraction than 'nested' behaviors. Nested != First Class. You'd have special functions to lift a first-class behavior as an argument (e.g. add a phantom type to prohibit non-causal observation), and to lower it into the main system for sampling (e.g. ArrowApply).
The usual model for arrowized FRP is based on this type:

  newtype Auto a b = Auto (a -> (b, Auto a b))

I would be very interested in how you would write an ArrowApply instance for such a type. So far my conclusion is: it's impossible. Do you have a different AFRP model in mind?

Greets, Ertugrul

-- nightmare = unsafePerformIO (getWrongWife >>= sex) http://ertes.de/

On Fri, Oct 14, 2011 at 1:31 AM, Ertugrul Soeylemez wrote:
David Barbour wrote:
If you want first-class behaviors or behavior transformers, those will need a different abstraction than 'nested' behaviors. Nested != First Class. You'd have special functions to lift a first-class behavior as an argument (e.g. add a phantom type to prohibit non-causal observation), and to lower it into the main system for sampling (e.g. ArrowApply).
The usual model for arrowized FRP is based on this type:
newtype Auto a b = Auto (a -> (b, Auto a b))
I would be very interested in how you would write an ArrowApply instance for such a type. So far my conclusion is: It's impossible.
Interesting claim. The implementation is obvious enough:

  runAuto (Auto f) = f
  app = Auto $ \ (f,x) -> let (x',f') = runAuto f x in (x',app)

Which arrow laws does this violate? Or is your concern that a fresh arrow supplied to `app` at each instant obviously cannot accumulate state?

The Yampa AFRP model chooses to model products using the `Either` type - i.e. indicating that either element can be updated independently. Using this, one could accumulate state in the captured arrow, though there'd be a funky reset whenever the arrow is updated.

Getting back on topic, protecting causality requires that we prevent sampling of a first-class Auto element. One way to do this is to use a Rank2Type, similar to the ST monad:

  newtype Auto era a b = Auto { unAuto :: a -> (b, Auto era a b) }
  runAuto :: (forall era . Auto era a b) -> [a] -> [b]

This would prevent developers from `running` an Auto of a known era, except by protected mechanisms such as ArrowApply. Simulations would need to be era-independent.

The reactive model I'm developing, Reactive Demand Programming, is actually anti-causal: behavior at any given instant may depend only upon its present and future inputs (anticipation), but never the past. State is treated as an external service, part of an abstract machine, an orchestration of registers or a database. I think this setup works better than FRP, e.g. for controlling space leaks, supporting smooth transitions and upgrades of dynamic behavior, modeling the app as a whole as dynamic, and orthogonal persistence.
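Spelled out, a minimal self-contained sketch of that instance (I've written runAuto as a record selector, and the Category/Arrow boilerplate is here only so the snippet stands alone):

  import Prelude hiding (id, (.))
  import Control.Category
  import Control.Arrow

  newtype Auto a b = Auto { runAuto :: a -> (b, Auto a b) }

  instance Category Auto where
    id = Auto $ \x -> (x, id)
    Auto g . Auto f = Auto $ \x ->
      let (y, f') = f x
          (z, g') = g y
      in (z, g' . f')

  instance Arrow Auto where
    arr f = Auto $ \x -> (f x, arr f)
    first (Auto f) = Auto $ \(x, c) ->
      let (y, f') = f x in ((y, c), first f')

  -- The instance in question: the captured arrow's continuation is
  -- computed and then discarded, which is exactly why the captured
  -- arrow cannot accumulate state from one instant to the next.
  instance ArrowApply Auto where
    app = Auto $ \(f, x) ->
      let (y, _f') = runAuto f x in (y, app)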

David Barbour wrote:
The usual model for arrowized FRP is based on this type:
newtype Auto a b = Auto (a -> (b, Auto a b))
I would be very interested in how you would write an ArrowApply instance for such a type. So far my conclusion is: It's impossible.
Interesting claim. The implementation is obvious enough:

  runAuto (Auto f) = f
  app = Auto $ \ (f,x) -> let (x',f') = runAuto f x in (x',app)
Which arrow laws does this violate? Or is your concern that a fresh arrow supplied to `app` at each instant obviously cannot accumulate state?
It's not about the laws, it's about losing state.
Yampa AFRP model chooses to model products using the `Either` type - i.e. indicating that either element can be updated independently. Using this, one could accumulate state in the captured arrow, though there'd be a funky reset whenever the arrow is updated.
Of course you can make a trade-off, but I don't think it's possible to solve this in a clean way in the automaton model.
The reactive model I'm developing, Reactive Demand Programming, is actually anti-causal: behavior at any given instant may depend only upon its present and future inputs (anticipation), but never the past. State is treated as an external service, part of an abstract machine, orchestration of registers or a database. I think this setup works better than FRP, e.g. for controlling space-leaks, supporting smooth transitions and upgrades of dynamic behavior, modeling the app as a whole as dynamic, and orthogonal persistence.
I would be very interested in such a model. Are there any resources online?

Greets, Ertugrul

-- nightmare = unsafePerformIO (getWrongWife >>= sex) http://ertes.de/

On Fri, Oct 14, 2011 at 3:07 AM, Ertugrul Soeylemez wrote:
It's not about the laws, it's about losing state.
I think you should not accumulate state; the abstraction gives us a fresh arrow each instant, conceptually and pragmatically. But it would not be difficult to create an abstraction specifically for dynamic, stateful accumulators.

  class (Arrow a) => ArrowAppS a where
    apps :: st -> a (a (x,st) (y,st), x) y

  instance ArrowAppS Auto where
    apps s0 = Auto $ \ (f,x) ->
      let ((y,sf),f') = runAuto f (x,s0) in (y, apps sf)
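To illustrate the difference, a quick sketch of how state does and does not survive (reusing the Auto, Category, Arrow and ArrowApply definitions sketched earlier in the thread plus the ArrowAppS instance above; countFrom, counter and stepAll are names of my own for this example):

  -- A self-counting Auto: its state lives in its continuation.
  countFrom :: Int -> Auto a Int
  countFrom n = Auto $ \_ -> (n + 1, countFrom (n + 1))

  -- A counter whose state is threaded explicitly, as apps expects.
  counter :: Auto (a, Int) (Int, Int)
  counter = arr $ \(_, n) -> (n + 1, n + 1)

  -- Drive an Auto over a list of inputs, one instant per element.
  stepAll :: Auto a b -> [a] -> [b]
  stepAll _        []       = []
  stepAll (Auto f) (x : xs) = let (y, f') = f x in y : stepAll f' xs

  -- stepAll (countFrom 0) "abc"                             == [1,2,3]
  -- stepAll (arr (\x -> (countFrom 0, x)) >>> app)    "abc" == [1,1,1]
  -- stepAll (arr (\x -> (counter,     x)) >>> apps 0) "abc" == [1,2,3]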
The reactive model I'm developing, Reactive Demand Programming, is actually anti-causal: behavior at any given instant may depend only upon its present and future inputs (anticipation), but never the past. State is treated as an external service, part of an abstract machine, orchestration of registers or a database. I think this setup works better than FRP, e.g. for controlling space-leaks, supporting smooth transitions and upgrades of dynamic behavior, modeling the app as a whole as dynamic, and orthogonal persistence.
I would be very interested in such a model. Are there any resources online?
Just my blog. Here are a few select articles:

* http://awelonblue.wordpress.com/2011/05/21/comparing-frp-to-rdp/
* http://awelonblue.wordpress.com/2011/05/28/anticipation-in-rdp/
* http://awelonblue.wordpress.com/2011/08/26/demand-monitors-heart-and-soul-rd...

My code is on github, but I still break it on a regular basis.

Regards, Dave

David Barbour wrote:
Alan Jeffrey wrote:
A function (f : Beh A -> Beh B) is causal whenever it respects =t, i.e. (forall t . a =t b => f a =t f b).
Yes. Function outputs only depend on the past values of the input function.
Your conclusions about double and weird are accurate. double lifts the future at each instant into the present, which is obviously not causal. And the `weird` function presumes you have already obtained a complete view of a behavior at each instant.
The `problem` such as it exists: you will be unable to causally construct the argument to the `weird` function, except by modeling a nested/simulated world (i.e. modeling one FRP system within another). This is not an unrealistic endeavor, e.g. one might model the future position of a thrown baseball in order to predict it. In this sense, `weird` is not weird.
I concur with that. The function

  double :: Behavior a -> Behavior (Behavior a)
  double x = const x

is not causal because it makes all future values of the behavior x available "at once". However,

  weird :: Behavior (Behavior a) -> Behavior a
  weird = join . fmap (. (+1))
    where join a t = a t t

is clearly causal as a composition of two causal functions. The point is that the innermost behavior was already available "in full", so it's perfectly possible to evaluate it at any time desired.

Of course, the function

  double' x t = \t' -> if t' <= t then x t' else _|_

is causal.

Best regards, Heinrich Apfelmus

-- http://apfelmus.nfshost.com
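As a quick sanity check (the Time/Behavior synonyms and the sample behaviour are mine): with behaviours as plain functions, the reader-monad fmap and join give exactly this definition of weird.

  import Control.Monad (join)

  type Time = Int
  type Behavior a = Time -> a

  weird1, weird2 :: Behavior (Behavior a) -> Behavior a
  weird1 a t = a t (t + 1)
  weird2 = join . fmap (. (+1))   -- fmap = (.) and join = diagonal for ((->) Time)

  -- e.g. with the behaviour  b t u = t * 10 + u :
  --   weird1 (\t u -> t * 10 + u) 3 == 34
  --   weird2 (\t u -> t * 10 + u) 3 == 34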

On 10/13/2011 10:43 PM, David Barbour wrote:
On Thu, Oct 13, 2011 at 7:54 AM, Alan Jeffrey <ajeffrey@bell-labs.com> wrote:
The `problem` such as it exists: you will be unable to causally construct the argument to the `weird` function, except by modeling a nested/simulated world (i.e. modeling one FRP system within another). This is not an unrealistic endeavor, e.g. one might model the future position of a thrown baseball in order to predict it. In this sense, `weird` is not weird.
Ah, I think this is a very good summary. It seems that there's an implicit shift of worlds when you nest FRP behaviours. The top-level world (the one that reactimate is executing) uses wall-clock time, but nested behaviours are in a different world, where time is simulated.

Making these worlds explicit (I never met a problem that couldn't use some more phantom types :-) we have a type Beh W A for a behaviour in world W of type A, and a definition of causality that's indexed by worlds. Writing RW for the top-level real world, and SW for a simulated world, we have:

  weird : Beh RW (Beh RW A) -> Beh RW A
  weird b t = b t (t + 1)    -- not causal

  weird : Beh RW (Beh SW A) -> Beh RW A
  weird b t = b t (t + 1)    -- causal

and:

  double : Beh RW A -> Beh RW (Beh RW A)
  double b t u = b u         -- causal

  double : Beh RW A -> Beh RW (Beh SW A)
  double b t u = b u         -- not causal

[Caveat: details not worked out.]

Making worlds explicit like this I think helps clarify why one person's "weird" is another person's "perfectly reasonable function" :-)

Does something like this help clarify matters?

A.
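A rough Haskell rendering of this sketch, with the caveat repeated: all of the names below are mine, and the phantom parameter here is pure documentation rather than an enforced discipline.

  newtype Time    = Time Integer
  newtype Beh w a = Beh { at :: Time -> a }

  data RW   -- the top-level, wall-clock world that reactimate runs in
  data SW   -- a simulated world, whose whole timeline is just data

  -- Sampling a nested *simulated* behaviour at t+1 is fine, because the
  -- simulated timeline is already available in full:
  weirdOK :: Beh RW (Beh SW a) -> Beh RW a
  weirdOK b = Beh $ \(Time t) -> at (at b (Time t)) (Time (t + 1))

  -- Re-nesting a real-world behaviour exposes its entire future at each
  -- instant, so a causal API would not export this one:
  doubleBad :: Beh RW a -> Beh RW (Beh RW a)
  doubleBad b = Beh (const b)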

On Fri, Oct 14, 2011 at 7:27 AM, Alan Jeffrey wrote:
On 10/13/2011 10:43 PM, David Barbour wrote:
On Thu, Oct 13, 2011 at 7:54 AM, Alan Jeffrey <ajeffrey@bell-labs.com> wrote:
The `problem` such as it exists: you will be unable to causally construct the argument to the `weird` function, except by modeling a nested/simulated world (i.e. modeling one FRP system within another). This is not an unrealistic endeavor, e.g. one might model the future position of a thrown baseball in order to predict it. In this sense, `weird` is not weird.
Ah, I think this is a very good summary. It seems that there's an implicit shift of worlds when you nest FRP behaviours. The top level world (the one that reactimate is executing) uses wall-clock time, but nested behaviours are in a different world, where time is simulated.
Making these worlds explicit (I never met a problem that couldn't use some more phantom types :-) we have a type Beh W A for a behaviour in world W of type A, and a definition of causality that's indexed by worlds. Writing RW for the top-level real world, and SW for a simulated world, we have:
weird : Beh RW (Beh RW A) -> Beh RW A
weird b t = b t (t + 1)    -- not causal
[snip]
Making worlds explicit like this I think helps clarify why one person's "weird" is another person's "perfectly reasonable function" :-)
Well, I think you have the concept right. But `weird` is always causal. The burden of violating causality has been shifted to the user of weird. Regards, Dave

I should add that I have a pragmatic reason for asking about causality, which is that over at https://github.com/agda/agda-frp-js I have an implementation of FRP for Agda running in the browser using an Agda-to-JS back end I wrote. In that model, I can see how to implement deep causality, but I can't see how to implement shallow causality, since the back end interfaces to the DOM event and time model. A.

One more thing... The function:

  return :: a -> Beh a
  return x t = x

fails to be causal when a is itself a behaviour, since it specializes to (after a bit of eta-conversion):

  return :: Beh a -> Beh (Beh a)
  return b t u = b u

which isn't causal. This rules out return, which in turn means that Beh can't implement the Monad type class. I don't think this impacts arrowized FRP, but it does mean that any causal non-arrowized model can't form a monad, which seems unfortunate.

If the "causality" requirement were replaced by "deep causality" then I believe the model would form a monad (cough cough but in a different category cough).

A.
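For reference, a sketch of the Monad instance being ruled out here (the newtype wrapper and the Functor/Applicative boilerplate are mine): pure at a nested behaviour type is precisely double, while (>>=) uses the diagonal join a t = a t t, which is causal.

  type Time = Int
  newtype Beh a = Beh { at :: Time -> a }

  instance Functor Beh where
    fmap f (Beh a) = Beh (f . a)

  instance Applicative Beh where
    pure x = Beh (const x)                    -- at a = Beh b, this is `double`
    Beh f <*> Beh a = Beh (\t -> f t (a t))

  instance Monad Beh where
    Beh a >>= k = Beh (\t -> at (k (a t)) t)  -- the diagonal: join a t = a t t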
participants (4):
- Alan Jeffrey
- David Barbour
- Ertugrul Soeylemez
- Heinrich Apfelmus