FRP, continuous time and concurrency

Hello,

I'm trying to implement something like the Observer pattern in a functional setting. Some people told me that maybe I will find what I want in FRP. The Observer pattern enforces the propagation of changes in the subject to the observers, but in a way where concurrency is not considered, through the notify-update mechanism. Time is not considered because the updating methods are invoked in a synchronous way. Another way to say it is that time is a discrete value which is incremented by one with any subject change, and the observers use a polling strategy at every instant of time. The subject and the observers operate in the same thread.

As far as I've seen (but I'm not sure if I'm right) FRP models behaviours and time as continuous entities (although practical implementations use discrete time and discrete sampling in event detection). You must represent the time explicitly, and concurrency is used to model behaviours and events. Is there any interpretation of FRP without using concurrency? Can time be modelled in a discrete fashion (as described for the Observer pattern)? I didn't find any example of that. Is it possible to model something like the Observer pattern using the standard FRP conventions? Are these situations within the aims of FRP?

I think the Observer pattern and FRP are related somehow, but I cannot figure out how.

Alvaro.

Hi Alvaro,

Is it possible to model something like the Observer pattern using the standard FRP conventions? Are these situations within the aims of FRP?

There are several flavours of FRP that approach reactivity from different angles. I'd say Grapefruit is the one most relevant to the Observer pattern, since it models complex systems as a network of interconnected circuits, where circuits are effectful entities, and they can communicate through both discrete and continuous signals. The other system that might be relevant is Yampa, since you model entities as stateful signal functions (but unlike in Grapefruit they cannot perform side effects), and connect them however you want. Incidentally, both of these approaches are arrow based.

In contrast, Reactive aims to describe the (output over the) whole lifetime of an entity as a pure value. Dependencies between entities are established simply by defining one as a function of the other, and mutual dependencies are naturally allowed. I don't think there's any meaningful way to connect the Observer pattern to that. In fact, the basic OO design patterns are often meaningless in functional programming, because it's a completely different world.

Gergely

-- http://www.fastmail.fm - Or how I learned to stop worrying and love email again

Hi,

When talking about the Observer pattern I was thinking of the case that appears in the reactive programming entry in Wikipedia (http://en.wikipedia.org/wiki/Reactive_programming).

If we have a reactive setting then the statement

a := b + c

could mean that there is a dynamic data flow from the values b and c to the value a, so whenever b or c changes, a is automatically changed.

This has similarities to the OO Observer pattern (in fact, you can implement it using the pattern) and is also supported in some newer scripting languages such as JavaFX.

It seems that some goals of reactive programming are shared with the goals of the Observer pattern. Are they actually related? Maybe this relationship is between the pattern and reactive programming in general, not FRP. Is this so, or is Wikipedia just wrong about that? Anyway, is it possible to implement something like the a := b + c case using Reactive or any other FRP package? Can someone show an example?

Thanks,

Alvaro.

Basically this is also referred to as "push" versus "pull". The way I understand it: pushing means that when a source (= the real inputs, aka sensors, e.g. mouse) changes, it notifies its dependencies about this change. At the lowest level pushes are caused by interrupts; at the high level this is typically represented in OO by subject/observer (aka signals/slots, aka events in .NET). Pulling means that you read a value from the final output "node" (e.g. the voltage of a motor or a 3D object to be displayed on the screen); this node reads its inputs, and so on, until it samples the value of a sensor.

Pulling has the advantage that only those values that are needed are computed, but has the disadvantage that all these values are always recomputed.

Pushing has the advantage that only those values that change cause other values to be updated, but has the disadvantage that values may be updated that are not actually needed in the output, or that other changes can cause redundant updates.

I believe Reactive actually tries to find a good balance between push and pull: a reactive value (which basically is a value sampled at discrete points in time) is only recomputed when any of its inputs changes. Grapefruit also does this but uses a completely different approach.

My personal opinion is that you cannot statically determine which strategy is best (and where to insert nodes that cache values); one would need to perform profiling to measure the change frequency of sensors (and dependencies) in a typical scenario. Of course sensors could have an estimated change frequency that would help at compile time. For example, when playing a video game the change frequency of the analog stick is very high, while that of the joypad buttons is much lower.
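A minimal sketch contrasting the two strategies described above, using nothing but IORefs (toy code for illustration only, not taken from any FRP library):

import Control.Applicative ((<$>), (<*>))
import Data.IORef

-- Pull: the output recomputes by reading its inputs on demand.
pullDemo :: IO ()
pullDemo = do
  b <- newIORef (1 :: Int)
  c <- newIORef 2
  let a = (+) <$> readIORef b <*> readIORef c   -- recomputed on every read
  a >>= print        -- 3
  writeIORef c 5
  a >>= print        -- 8, but note that the unchanged b was re-read as well

-- Push: a source keeps a list of listeners and notifies them on every change.
pushDemo :: IO ()
pushDemo = do
  listeners <- newIORef ([] :: [Int -> IO ()])
  let subscribe f = modifyIORef listeners (f :)
      setC x      = readIORef listeners >>= mapM_ ($ x)
  subscribe (\x -> print (x + 1))   -- a dependent value, updated only on change
  setC 2                            -- prints 3
  setC 5                            -- prints 6

main :: IO ()
main = pullDemo >> pushDemo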

I believe there's also an issue of pull potentially creating large space leaks (where some input accumulates because nobody is pulling it), as well as perhaps aggregating processing time that will be paid at a single point when the value is actually pulled.

Eyal
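A small sketch of the kind of leak meant here (toy code: input pushed onto a channel that nothing pulls from simply accumulates until someone finally reads):

import Control.Concurrent
import Control.Concurrent.Chan
import Control.Monad

main :: IO ()
main = do
  inputs <- newChan                                -- pretend this buffers sensor samples
  forkIO . forever $ writeChan inputs (0 :: Int)   -- the producer keeps pushing
  threadDelay 1000000                              -- nobody pulls for a second...
  -- ...by now the channel holds every sample produced so far: a space leak,
  -- and reading the first one pays for none of the backlog behind it.
  readChan inputs >>= print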

I don't think the push/pull issue is relevant to the discussion about FRP and the OO observer pattern. The push/pull issue is an (important) implementation detail of FRP.

On 9 June 2009 at 19:09, Álvaro García Pérez wrote:
This has similarities to the OO Observer pattern (in fact, you can implement it using the pattern) and is also supported in some newer scripting languages such as JavaFX.

One thing the observer pattern doesn't give you is any guarantee on the order of updates: when the observed value changes, it updates all the observers in no specified order. However, if a value observes more than one value, this may result in glitches (i.e. values you actually don't want to see).

For example, suppose your value dependencies are as follows:

a = b + c
b = c + 1

i.e. a observes b and c, and b observes c, and initially we have:

c = 0
b = 1
a = 1

If c updates from 0 to 2, either of the following two sequences of updates may be seen with the OO observer pattern:

c = 2; a = 3; b = 2; a = 4;
c = 2; b = 2; a = 4;

But usually you don't want to see the a = 3; it's a glitch. FRP systems update the graph of dependencies in topological order (i.e. they ensure, before updating a value, that each value it depends on has been updated), and you are guaranteed you'll only see the second sequence of updates.

FRP can be seen as a form of value observation in the sense that changes in a value eventually get propagated to other values that depend on it. But it is clearly not the same as the OO observer pattern as usually understood/implemented, because of this ordering issue. FRP is more subtle and powerful in the management of the value's dependency graph.

Best,

Daniel
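A minimal sketch of the glitch described above, with plain IORefs playing the role of a naive observer implementation (toy code, not taken from any FRP library):

import Data.IORef

-- Dependencies: a = b + c and b = c + 1, starting from c = 0, b = 1, a = 1.
main :: IO ()
main = do
  c <- newIORef (0 :: Int)
  b <- newIORef 1
  a <- newIORef 1
  let updateB = do
        cv <- readIORef c
        writeIORef b (cv + 1)
      updateA = do
        bv <- readIORef b
        cv <- readIORef c
        writeIORef a (bv + cv)
        readIORef a >>= print
  -- c changes from 0 to 2, and a naive subject notifies its observers in an
  -- unspecified order.  Notifying a before b exhibits the glitch:
  writeIORef c 2
  updateA      -- prints 3: the glitch, because b is still stale
  updateB
  updateA      -- prints 4: the value we actually want
  -- A topologically ordered (glitch-free) propagation would run updateB
  -- before updateA, so only 4 would ever be observed.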

While I don't agree that push/pull is not relevant, I totally agree on the glitches of the observer pattern you described here, which IMO make the observer pattern only really practical in a model/view setting, where the view is the observer and the model is the subject.

I agree that there are some issues about the cyclic dependencies with the observer pattern, but I think that considering it a glitch or not is a matter of philosophical discussion. In Daniel's example you will end up with the proper value in "a" (even if you have an improper value for some negligible instants). You may only consider this a problem if your system has poles near those values, which can turn it into a diverging system, and if the updating period is big enough to trigger those diverging effects. For almost any non-real-time-control system this is not likely to happen. And of course, you can implement your own Observer pattern imposing some policies on the update ordering, which can help to avoid the glitches.

Anyway, can you give any implementation of this example using the reactive library? I'm interested in how to code this, since I don't know the paradigm and the libraries. Can someone sketch some code where the reactive variables (a := b + c ...) or something like that is implemented? Even a trivial example will do; I just want to know how to translate this into code.

Thanks,

Alvaro.

Anyway, can you give any implementation of this example using the reactive library?

If b and c are signals (or behaviours, as they are called in Reactive) carrying Num values of the same type, you can simply say a = b + c, and you're done. Signal a will be updated only when either b or c is updated. Note that this must be understood in the context of laziness, i.e. not a single sum is calculated until a sample of a is requested.

Gergely

-- http://www.fastmail.fm - One of many happy users: http://www.fastmail.fm/docs/quotes.html

So, it may be that we've made Num a => Behavior a an instance of Num, in which case this is valid code; I think the definition

a = liftA2 (+) b c

is more instructive. The point is that Behavior is an instance of Applicative, so we can apply a time-varying function (such as (+) <$> b) to a time-varying argument (such as c), so that the answer is modified when either the function or the argument is.

Freddie
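A sketch of what making Behavior an instance of Num amounts to, using a toy Behavior modelled as a function of time (an assumption for illustration; Reactive's real Behavior type is more elaborate):

import Control.Applicative (liftA2)

newtype Behavior a = Behavior { at :: Double -> a }

instance Functor Behavior where
  fmap f (Behavior g) = Behavior (f . g)

instance Applicative Behavior where
  pure x                    = Behavior (const x)
  Behavior f <*> Behavior x = Behavior (\t -> f t (x t))

-- The Num instance is the Applicative instance lifted pointwise, which is
-- exactly what lets "a = b + c" be written literally:
instance Num a => Num (Behavior a) where
  (+)         = liftA2 (+)
  (-)         = liftA2 (-)
  (*)         = liftA2 (*)
  abs         = fmap abs
  signum      = fmap signum
  fromInteger = pure . fromInteger

b, c, a :: Behavior Double
b = Behavior id   -- b(t) = t
c = pure 1        -- c(t) = 1
a = b + c         -- a(t) = t + 1; the same thing as liftA2 (+) b c

main :: IO ()
main = print (a `at` 2)   -- 3.0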

I don't completely understand how you can wrap your reactive definition into a particular implementation.

Let's take the IO legacy adapter, for example: how could I use the applicative lifting (liftA2) with behaviours to implement things inside the IO monad? Can you give some code adapting the "a = liftA2 (+) b c" example to the console? Are threads and concurrency required to do so?

Alvaro.

This is a silly example. Console lines "b = x" update the value of b; "c = y" likewise; lines starting with "a" cause the current value of a to be printed.

module Main where

import FRP.Reactive
import FRP.Reactive.LegacyAdapters
import Data.List
import Control.Monad
import Control.Concurrent
import Control.Applicative

parseEvent :: String -> Event String -> Event Integer
parseEvent s = fmap read . joinMaybes . fmap (stripPrefix s)

main :: IO ()
main = do
  cl     <- makeClock
  (s, e) <- makeEvent cl
  forkIO . forever $ getLine >>= s      -- feed console lines in as an event
  let b = stepper 0 $ parseEvent "b =" e
      c = stepper 0 $ parseEvent "c =" e
      p = parseEvent "a" e
      a = liftA2 (+) b c                -- the only interesting line
  adaptE . fmap print $ snapshot_ a p   -- print a whenever an "a" line arrives

So yes, this does use explicit concurrency, because "feeding" the reactive events (with getLine) and printing the answers must happen in different threads.

Interestingly, this fairly simple program gobbles CPU and RAM on reactive-0.11, as well as running with a bit of a lag. Could joinMaybes be to blame? I don't know how happy the Monad instance of Event is these days.

Freddie

"Fundamentally broken" about covers it. Well, to be specific, joinE is broken, and looks hard to fix. The Monoid instance for Event is also broken, but I think only when all Events involved are finite. Further, I was trying to fix it, but GHC is broken. I'd also like to note that LegacyAdapters is broken. I've got a fix for the broken bits, which happens to break everything else. Blocked on another GHC bug, though. ..until further notice, just assume "broken". -- Svein Ove Aas

..until further notice, just assume "broken".

Useful to know. I shall postpone its use in critical projects.

BTW, s/joinMaybes/justE works quickly, in minimal CPU and memory. Forgot about that function. Curious that the operational semantics have become so odd, though; I'm used to denotational ones being off -- late updates and so forth -- but I haven't come across quite that level of resource consumption in reactive code before.

Freddie
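For reference, the substitution mentioned above amounts to the following (assuming justE is exported by FRP.Reactive with the shape Event (Maybe a) -> Event a):

parseEvent :: String -> Event String -> Event Integer
parseEvent s = fmap read . justE . fmap (stripPrefix s)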

On 10 June 2009 at 12:20, Álvaro García Pérez wrote:

I agree that there are some issues about the cyclic dependencies with the observer pattern,

Note it is not a problem about _cyclic_ dependencies; the dependencies I have shown form a directed acyclic graph. Cyclic dependencies are yet another issue, which you can solve for example with fixed points and infinitesimal delays, but it's a different problem.

but I think that considering it a glitch or not is a matter of philosophical discussion.

The semantics you really want is no glitches (= instantaneous propagation times, aka the synchrony hypothesis). Because you want to think about your values as being, for all t: a(t) = b(t) + c(t), and you can't do that if you allow the glitches to occur. You do the same kind of reasoning in electronics when you introduce latches and clocks.

In Daniel's example you will end up with the proper value in "a" (even if you have an improper value for some negligible instants). You may only consider this a problem if your system has poles near those values, which can turn it into a diverging system, and if the updating period is big enough to trigger those diverging effects. For almost any non-real-time-control system this is not likely to happen.

You may think it's marginal but it's not: you get into problems as soon as your update functions may perform some kind of side effects (e.g. write something to a file). A friend of mine ran exactly into the problem I described with the observer pattern in Cocoa; all the glitches triggered expensive and unused graphical updates and were ruining the interactive experience with the system. Eventually he had to side-step the observation mechanism for certain parts of his system.

And of course, you can implement your own Observer pattern imposing some policies on the update ordering, which can help to avoid the glitches.

Yes, you'll end up implementing an FRP system. You'll see this will be quite different from an observer pattern implementation, because you need more contextual information about the observers to perform the update of the dependents under a synchrony hypothesis.

Best,

Daniel

I'm trying to implement something like the Observer pattern in a functional setting. Some people told me that maybe I will find what I want in FRP. The Observer pattern enforces the propagation of changes [...]
One way to relate the Observer pattern (OP) to FRP is to view OP as an imperative implementation of something like part of the functional semantics of FRP. Specifically, a *push*-based implementation (in contrast to most previous FRP implementations).
From this viewpoint, I suggest changing your goal. Instead of "trying to implement something like the Observer pattern in a functional setting", you might try to identify a *functional* idea that serves the same purposes as OP (and implement it however you want, e.g. push and/or pull). Or you might use OP as a (hidden, imperative) implementation of some functional idea. Likely both.
Unless you're terrifically careful, I expect OP will lead you to incorrect implementations of FRP (which is fine if you're not trying to implement FRP or something else with simple & precise semantics).

I first tried to implement FRP via OP (push) in 1998 (see ref in http://conal.net/papers/simply-reactive/), and I was unable to get the FRP semantics correct at the time. Coming home from ICFP 2007, after chatting with Mike Sperber, I got the idea for Reactive. The implementation approach in Reactive is not classical push. It's a highly-multithreaded pull, in which most threads are blocked most of the time. I had always thought that push == data-driven and pull == demand-driven, but Reactive is data-driven pull (made possible by multi-threading). The threading is hidden inside the implementation of unamb.
Regards, - Conal
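For readers who haven't met it, unamb is the "unambiguous choice" operator from the unamb package; a small sketch of the idea (assuming the Data.Unamb module with unamb :: a -> a -> a):

import Data.Unamb (unamb)

-- unamb a b races the evaluation of a and b, which are required to agree
-- whenever both terminate, and returns whichever becomes defined first.
-- The classic use is a symmetric "parallel or", defined as soon as either
-- argument is True, no matter which one diverges:
parallelOr :: Bool -> Bool -> Bool
parallelOr x y = (x || y) `unamb` (y || x)

main :: IO ()
main = print (unamb (1 + 1) (2 :: Int))   -- both sides denote 2; either may win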
participants (8)

- Conal Elliott
- Daniel Bünzli
- Eyal Lotem
- Freddie Manners
- Patai Gergely
- Peter Verswyvelen
- Svein Ove Aas
- Álvaro García Pérez