
Hi everyone,

I'm pleased to announce Elerea, aka "Eventless reactivity", a minimalistic FRP implementation that

- comes with a convenient applicative interface (similar to Reactive)
- supports recursive definition of signals
- supports signals fed from outside by IO actions
- supports dynamism in the signal structure (I think ;)
- seems to play nice with resources, especially memory
- is based on some unsafePerformIO dark magic (that might easily break depending on many factors)
- might have some parallelisation potential
- has absolutely no formal foundations; it's just the result of some furious hacking over the last few days!

There are working examples to show off the current capabilities of the library, found in the separate elerea-examples package. Have fun playing with it!

Gergely

--
http://www.fastmail.fm - A no graphics, no pop-ups email service

I've seen a lot of FRP libraries come up, and I'm always left with the question, "Where the heck are the FRP tutorials?"

I'm talking about the bare-bones, "I've-never-even-touched-this-stuff-before" kind of tutorial: something that explains the general theory and provides a few simple applications, maybe the start of a bigger one or something to mess around with to actually learn how to use FRP.

The notion seems interesting, and perhaps I just haven't googled hard enough, but I can't really seem to find a good, newbie-level tutorial on it.

Can anyone aim me in the right direction?

http://haskell.org/haskellwiki/Reactive/Tutorial/A_FPS_display

Mind you, FRP libraries differ quite a lot, and using them might require knowledge of their innards, but the main thing to wrap your head around is looking at the points instead of the whole "update loop", as FRP abstracts away the time.

--
(c) this sig last receiving data processing entity. Inspect headers for copyright history. All rights reserved. Copying, hiring, renting, performance and/or quoting of this signature prohibited.
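Achim's point about "looking at the points instead of the update loop" is the heart of classical FRP, and it can be sketched in a few lines. This is a toy model only (a behaviour as a bare function of time), not the representation any of the libraries in this thread actually use:

```haskell
-- Classical-FRP toy model: a behaviour is a whole function of time,
-- not a variable mutated inside an update loop.
type Time       = Double
type Behavior a = Time -> a

-- A point circling the origin, described for all time at once:
orbit :: Behavior (Double, Double)
orbit t = (cos t, sin t)

-- Point-wise lifting: combine behaviours without mentioning any loop.
lift2 :: (a -> b -> c) -> Behavior a -> Behavior b -> Behavior c
lift2 f ba bb t = f (ba t) (bb t)

main :: IO ()
main = print (orbit 0)   -- the behaviour sampled at t = 0
```

Sampling happens only at the edge of the program; everything else is ordinary function composition over whole time-varying values.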

Awesome, Sir, you are a gentleman and a scholar. Thanks.

/Joe

I've seen a lot of FRP libraries come up, and I'm always left with the question, "Where the heck are the FRP tutorials?"

Writing a good tutorial takes a lot of time, but I made some example code precisely to show how the library can be used. The best way to learn is to start playing with it. The "chase" example is about as small as it gets, and the breakout one is generally a nice test case for such a library, as it exercises the code as well as providing a reasonably complex example for the user to study.

The basic idea is indeed what Achim said: we refer to the whole lifetime of some time-varying quantity (what I call a signal, though the name "behaviour" is also often used for this concept in other systems) with a single name. For instance, mousePosition is a two-dimensional vector whose value follows the position of the mouse at any time. The applicative interface means that you can combine signals using ordinary point-wise functions by simple lifting. To keep with the example, something like drawUnitRectangle <$> mousePosition would result in a signal whose points are IO actions that draw the rectangle where the mouse is at the moment you sample them. The drawUnitRectangle function doesn't need to have anything reactive in it; it just takes an ordinary pair (or whatever you represent your vectors with) and does its job.

In the end, a program working with signals looks very much like one calculating a snapshot of the system and the state changes at the given moment, except you need to insert <*>'s and lift values where applicable.

Gergely

--
http://www.fastmail.fm - mmm... Fastmail...
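The applicative lifting Gergely describes can be sketched with a toy Signal type. Everything here is illustrative: Signal is just a wrapped sampling action, and mousePosition is a stand-in returning a fixed value, not Elerea's actual definitions:

```haskell
-- A toy signal: sampling it yields the current value at that moment.
-- This only models the applicative style; it is not Elerea's type.
newtype Signal a = Signal { sample :: IO a }

instance Functor Signal where
  fmap f (Signal s) = Signal (fmap f s)

instance Applicative Signal where
  pure x                = Signal (pure x)
  Signal f <*> Signal x = Signal (f <*> x)

-- Hypothetical input signal standing in for the real mouse position:
mousePosition :: Signal (Double, Double)
mousePosition = Signal (return (3, 4))

-- An ordinary, non-reactive drawing function:
drawUnitRectangle :: (Double, Double) -> IO ()
drawUnitRectangle p = putStrLn ("rectangle at " ++ show p)

-- Lifting yields a signal of IO actions for the driver loop to sample:
scene :: Signal (IO ())
scene = drawUnitRectangle <$> mousePosition

main :: IO ()
main = sample scene >>= id
```

Note that drawUnitRectangle knows nothing about signals; the lifting operators do all the plumbing.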

To keep with the example, something like drawUnitRectangle <$> mousePosition would result in a signal whose points are IO actions that draw the rectangle where the mouse is at the moment you sample them.
And just as IO is unnecessary for behavior (functions of time), it's also
unnecessary for imagery (functions of space). Continuing with the
functional (non-IO) theme, you can give a semantically precise, composable
and simple type of images.
- Conal
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

And just as IO is unnecessary for behavior (functions of time), it's also unnecessary for imagery (functions of space). Continuing with the functional (non-IO) theme, you can give a semantically precise, composable and simple type of images.

Yes, that's a further separation of concerns. Instead of producing an IO action, you produce a data structure to be consumed by the framework. However, there's an interesting question of dealing with feedback, i.e. signals affected (or even created and destroyed) by the IO actions resulting from your output.

It might sound like too low-level a design if you have to rely on IO-bound feedback, but I don't see how it could be put behind a nice abstraction in general. For instance, if you have some large terrain to roam around, you'll have to keep most of it on disk (static as well as dynamic objects), and it depends on your location and orientation which parts get loaded and discarded/archived at any time. While low-level caching and loading of resources is certainly not the responsibility of the reactive subsystem, it will still have to manage the life cycle of signals. And that's what I'm missing: a nice way to describe this kind of feedback.

Grapefruit looks like a possible solution to this problem (assuming it can be made completely dynamic), and then the integration of a Grapefruit-like and a Reactive-like system could be the ultimate solution in the long run.

Gergely

--
http://www.fastmail.fm - IMAP accessible web-mail

I wonder if this breaks referential transparency. Say, you define a signal s and use s twice in some expression. s may be evaluated once and it may be evaluated twice. Does this make a difference?

I'm not sure about the answer at the moment, but I believe we'd simply get two identically behaving signals in most cases, assuming they are evaluated in the same superstep. The automatic insertion of delays into cycles might cause problems here, as what exactly happens depends on the current topology of the dataflow network. However, if all the cycles are broken with explicit delays, this is not an issue any more. It would break everything if IO-fed signals were created multiple times, but that can't happen, since their construction is explicitly marked by IO.

Is it possible that thereby the second “instance” has different values than the first one?

I can't exclude the possibility, unfortunately, especially with the delay magic. But this is a question that needs further investigation.

When are your signals started? At evaluation time?

Yes. Evaluation returns an IORef holding the initial state of the signal, wrapped in some structure. The top-level network is traversed at the beginning, so its nodes are all evaluated just once.

If yes, doesn’t this mean that the meaning of a signal depends on evaluation time, so that evaluating the same signal expression twice actually results in two different signals?

That's exactly the case. The system heavily relies on sharing in order to work properly. In fact, the ability to keep track of sharing under a pure-looking interface was the main motivation to choose this particular implementation. As far as I can tell, it should be fine as long as no Elerea primitive (signal constructors defined in Internal) is inlined. But I don't like this either, and I'd love to hear how the same effect can be achieved without resorting to such fragile constructs. Unfortunately, all my previous attempts failed at one point or another...
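The sharing hazard under discussion can be demonstrated without any FRP machinery. This is only an illustration of the general unsafePerformIO/inlining issue Gergely describes, not Elerea's code:

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A stateful "node" whose mutable cell is created when the definition
-- is evaluated, Elerea-style. NOINLINE is what keeps it well-behaved:
-- if the compiler duplicated this definition, each copy would get its
-- own IORef, and the two uses below would no longer share state.
{-# NOINLINE tick #-}
tick :: IO Int
tick = let ref = unsafePerformIO (newIORef 0)
       in atomicModifyIORef ref (\n -> (n + 1, n + 1))

main :: IO ()
main = do
  a <- tick            -- first sample
  b <- tick            -- same cell is shared, so the count continues
  print (a, b)
```

With the pragma, both samples draw from one cell and print (1,2); without it, an optimizing compiler is free to duplicate the thunk, which is exactly the fragility being discussed.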
So Elerea seems to have to take special care to not break referential transparency.

No matter how I look at it, that would make it a completely different library. One that's very close to Reactive, in fact.

Elerea’s evaluation seems to be driven by hand-coded IO actions so that use of such compiler optimizations is certainly not possible.

This is quite true. On the other hand, most calculation happens in pure functions anyway, and reactive book-keeping probably constitutes only a small fragment of the runtime in a typical application, so this is less of a worry for me. In the end, I see FRP as an alternative way to manage entities, and entities surely can't be optimised away. I use applicative laws to unite pure parts as much as possible, although I don't expect the resulting functions to be as efficient as if they had been assembled at compile time. A JIT compiler surely wouldn't hurt. :)

Grapefruit does this using era type parameters. Elerea doesn’t seem to do anything about this at the moment.

Apart from counting on the conditions mentioned above, no. The intended meaning is that new signals are created whenever a latcher asks for one, but there's no way to enforce that for the time being.

Patai, could you please correct me where I’m wrong and clarify the points which are still unclear to me?

It seems I can hardly say anything new, because you see all the issues perfectly. (By the way, Patai is my family name. Different endianness over here. ;)

What do you think Grapefruit is lacking, compared to Reactive?

Oh, it's just my subjective opinion that I find the interface of Reactive very easy to use and flexible. That's primarily what I wanted to replicate in Elerea; I just ended up getting rid of events altogether, as I didn't need them, and that gave me a simpler system to play with. Grapefruit looks like something that feels more natural when describing the system at a higher level. But I really need to play with it more to have a well-founded opinion.
Gergely -- http://www.fastmail.fm - mmm... Fastmail...

"Patai Gergely"
...
I don't think using dirty tricks to implement FRP deserves flak, at all, from my POV, it sounds like complaining that the IO monad is implemented using C... meaning that if you're that close to bare thunks, you have every right to use any means necessary to make them behave properly. -- (c) this sig last receiving data processing entity. Inspect headers for copyright history. All rights reserved. Copying, hiring, renting, performance and/or quoting of this signature prohibited.

On Wednesday, 15 April 2009, 09:03, Achim Schneider wrote:
I don't think using dirty tricks to implement FRP deserves flak, at all, from my POV, it sounds like complaining that the IO monad is implemented using C... meaning that if you're that close to bare thunks, you have every right to use any means necessary to make them behave properly.
It depends. Using unsafe stuff internally might be acceptable and is sometimes necessary. I also use unsafePerformIO in Grapefruit for implementing CSignals, although I’m not very comfortable with this.

On the other hand, breaking referential transparency in the external interface is a very bad idea, in my opinion. Actually, this means that the library user would have to turn certain compiler optimizations off to get the intended behavior. Just have a look at the Haddock docs of unsafePerformIO. In my earlier years of Haskell programming, I thought that unsafePerformIO was not too bad, but I had to discover that it can quickly lead to catastrophe.

Best wishes,
Wolfgang

If you're interested in the history of FRP (which I think isn't too bad) you could read:

- the book "Haskell School of Expression" (http://www.haskell.org/soe/), which contains a good introduction to classical FRP;
- "The Yampa Arcade" paper (http://haskell.cs.yale.edu/yale/papers/haskell-workshop03/yampa-arcade.pdf), to get introduced to newer arrow-based FRP;
- FRAG (http://www.haskell.org/haskellwiki/Frag), a Quake-like game written in Yampa;
- "Genuinely Functional User Interfaces" (http://haskell.cs.yale.edu/yale/papers/haskellworkshop01/genuinely-functiona...), to see how user interfaces could be made with arrow-based FRP.

The newest FRP approaches are Reactive and Grapefruit, but these don't have a lot of examples yet. For Reactive, besides the nice FRP tutorial that was mentioned, you might want to look at David Sankel's tutorials (http://netsuperbrain.com/blog/posts/introducing-reactive-behaviors/). The examples for Grapefruit can be found in the darcs repos, as mentioned here: http://www.haskell.org/haskellwiki/Grapefruit

Another interesting approach to FRP is FrTime (in Scheme):
http://www.cs.brown.edu/~sk/Publications/Papers/Published/ck-frtime/
There should be a second paper besides this one, but I've forgotten its reference.

Thu

Another interesting approach to FRP is FrTime (in Scheme): http://www.cs.brown.edu/~sk/Publications/Papers/Published/ck-frtime/

Elerea is a bit similar, since it also maintains the connections between signals explicitly. However, its evaluation is a backward-chaining process starting from the top-level signal, only affecting the nodes it depends on transitively. Only stateful signals need to be mutated and reevaluated whenever the top level is sampled (to avoid time and space leaks). Constants, pure functions and function application nodes always stay the same, and they are lazily evaluated. Peripheral-bound external signals are also static from the point of view of the evaluator, and it is the responsibility of the environment to update them.
Gergely -- http://www.fastmail.fm - Accessible with your email software or over the web
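The "stateful signals advance once per sampling" scheme Gergely describes above can be sketched like this. The names are illustrative only, not Elerea's exported API:

```haskell
import Data.IORef

-- A stateful signal: each sample returns the current state and then
-- advances it, mirroring what happens to stateful nodes once per
-- superstep when the top level is sampled. Illustrative only.
stateful :: a -> (a -> a) -> IO (IO a)
stateful x0 advance = do
  ref <- newIORef x0
  return (atomicModifyIORef ref (\x -> (advance x, x)))

main :: IO ()
main = do
  counter <- stateful 0 (+ 1)
  samples <- sequence [counter, counter, counter]
  print (samples :: [Int])
```

Pure and constant nodes need no such cell at all; only the stateful ones carry a mutable slot that each superstep touches.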

Any idea how Elerea compares to Grapefruit? It's great to see a lot of
competition in the FRP arena, but I hope in the end this results in a really
usable and scalable FRP system for Haskell :-)
2009/4/11 Patai Gergely
An other interesting approach to FRP is frtime (in Scheme): http://www.cs.brown.edu/~sk/Publications/Papers/Published/ck-frtime/ There should be a second paper than this one, I have just forget its reference. Elerea is a bit similar, since it also maintains the connections between signals explicitly. However, its evaluation is a backward chaining process starting from the top-level signal, only affecting the nodes it depends on transitively. Only stateful signals need to be mutated and reevaluated whenever the top level is sampled (to avoid time and space leaks). Constants, pure functions and function application nodes always stay the same, and they are lazily evaluated. Peripheral bound external signals are also static from the point of view of the evaluator, and it is the responsibility of the environment to update them.
Gergely
-- http://www.fastmail.fm - Accessible with your email software or over the web
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

Any idea how Elerea compares to Grapefruit? It's great to see a lot of competition in the FRP arena, but I hope in the end this results in a really usable and scalable FRP system for Haskell :-) I think Wolfgang can judge this better, because I don't really know the innards of Grapefruit, while the Elerea core is extremely simple, so anyone can easily see how it works. By the way, I was told that Haddock will be run in a day or two, so the online docs will appear soon.
From the user's point of view, Elerea is probably closest to Reactive, except that everything including reactivity is expressed with continuous functions. There are no reactive elements with side-effects (at least conceptually), unlike the circuits of Grapefruit. Also, Elerea is essentially pull-based. In general, we can say there's a lot less going on in it than in Reactive or Grapefruit.
Gergely -- http://www.fastmail.fm - The way an email service should be

On Saturday, 11 April 2009, 16:57, Patai Gergely wrote:
Any idea how Elerea compares to Grapefruit? It's great to see a lot of competition in the FRP arena, but I hope in the end this results in a really usable and scalable FRP system for Haskell :-)
I think Wolfgang can judge this better, because I don't really know the innards of Grapefruit,
I didn’t have a deep look at Elerea so far but looked at the Haddock docs. If I understand correctly, Elerea’s signals construct mutable variables for caching their current values when they are evaluated. This happens using unsafePerformIO. Grapefruit’s DSignals and SSignals use a purely functional data structure to represent several possible future values, of which only the ones that actually occur are evaluated. This data structure is created using unsafeInterleaveIO.

So Elerea seems to have to take special care to not break referential transparency. Grapefruit has to take care only with CSignals, since only these use unsafePerformIO internally.

Since Grapefruit uses ordinary expression evaluation for evaluating signal values, it can probably make use of certain compiler optimizations. Elerea’s evaluation seems to be driven by hand-coded IO actions, so that use of such compiler optimizations is certainly not possible.

Both Grapefruit and Elerea probably need a way to fix a signal to a specific starting time. Grapefruit does this using era type parameters. Elerea doesn’t seem to do anything about this at the moment.

Patai, could you please correct me where I’m wrong and clarify the points which are still unclear to me?

Best wishes,
Wolfgang

I uploaded a new version of the library. The biggest change is that transfer functions and latchers have no delay by default any more.

Additionally, this version has a unique and rather shady feature: it inserts delays whenever it encounters a cyclic dependency involving stateful signals. If you don't like this behaviour, you can prevent it by adding the delays yourself wherever you want them. The change is quite spectacular if you compare the current behaviour of the breakout example with the previous one.

Gergely

--
http://www.fastmail.fm - Same, same, but different...
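The role delays play in cyclic signal definitions can be modelled with plain lazy streams (a signal as its list of samples per superstep). This is a model of the idea, not Elerea's implementation:

```haskell
-- 'delay' contributes an initial sample, which is exactly what makes
-- the recursive definition below productive; without a delay somewhere
-- on the cycle the definition would diverge, which is why such cycles
-- need a delay inserted (by hand, or automatically as described above).
delay :: a -> [a] -> [a]
delay x0 xs = x0 : xs

-- A counter defined in terms of its own delayed past:
counter :: [Int]
counter = zipWith (+) (delay 0 counter) (repeat 1)

main :: IO ()
main = print (take 5 counter)
```

Dropping the delay turns counter into `zipWith (+) counter (repeat 1)`, whose first element depends on itself, so sampling it loops forever.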

That's rather amazing. Is this the first FRP lib that does this?

It seems something is wrong with either your time sampling or performance, because I get rather jerky movement when playing the breakout or the chase demo. Did you notice that too?

At what rate are you sampling? If you are sampling at a lower rate than the video vsync, you might want to perform interpolation between two "logical frames", as Jules Bean did in his own reactive planets demo.

It seems something is wrong with either your time sampling or performance, because I get rather jerky movement when playing the breakout or the chase demo. Did you notice that too?

No, it was always smooth for me. But I only tried it on 64-bit machines and only with ghc-6.10.2.

At what rate are you sampling? If you are sampling at a lower rate than the video vsync, you might want to perform interpolation between two "logical frames".

The breakout example has a zero threadDelay in its main loop, because that apparently causes a pause the length of a scheduler tick, essentially giving a 50 Hz sampling rate. The chase example has no delay added at all, and it goes over 2500 fps for me.

--
http://www.fastmail.fm - Accessible with your email software or over the web

Interesting. I'm testing it on Windows, though. You're using Linux? Maybe the scheduling is different.

Interesting. I'm testing it on Windows, though. You're using Linux? Maybe the scheduling is different.

I'm on Linux, yes. But I've no idea what could cause such a difference, since both the library and the examples are single-threaded. GLFW might also be the culprit, since that's where the frame duration comes from. It might be a good idea to print the value of t in the readInput function. Under Linux, it's pretty uniform for both programs and always equal to the reciprocal of the frame rate, as expected.

--
http://www.fastmail.fm - Faster than the air-speed velocity of an unladen european swallow

Interesting. I'm testing it on Windows, though. You're using Linux? Maybe the scheduling is different.

Now I tried it on Windows in VirtualBox, and it still looks quite smooth to me (except that hardware acceleration doesn't seem to work properly through virtualisation, but it's okay as long as I keep the window small enough). However, unlike the Linux version, it does max out the CPU, so the trick with threadDelay 0 doesn't work. Apparently, I'll have to find a real Windows box somewhere, because I couldn't reproduce the jerkiness you're talking about.

Gergely

--
http://www.fastmail.fm - The way an email service should be

I will test it on a couple of machines, desktops and laptops. I think the problem was my laptop going into power save mode or something, since sometimes it runs smoothly, sometimes it doesn't. This could indeed be a problem with GLFW's time attribute on Windows (which uses the CPU tick frequency, which gets throttled to save energy), although I believe the Windows API should take care of this. I haven't seen this yet with my own FRP experiments, which seem to run smoothly all the time, but again I will do more testing.

I have been thinking about your approach, using mutable variables to hold a signal's sample. This is exactly what I did with an old C# prototype, and it worked out nicely. If you take a look at what Yampa does: it hides signals and only exposes signal functions. But that means that the FRP engine itself could indeed use mutable variables for its signals, as long as no side effects occur during the evaluation of the circuit at time T; the side effects should take place when the simulation is advanced to T+dT, which is done after the circuit is fully evaluated at time T. If you want higher order integration, you would need to keep a buffer of more samples, but it would still work. So I think your approach is a very good pragmatic one! I'm only a bit worried about your automatic insertion of delays; this might break referential transparency at time T, since it depends on the order in which the nodes in the circuit are evaluated, no? The latter could be problematic when doing evaluation in parallel on multiple cores, I guess.
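The two-phase scheme described above (pure reads while the circuit is evaluated at time T, all mutation deferred to the step to T+dT) can be sketched like this. The names Node, stage and advance are illustrative, not any library's actual API:

```haskell
import Data.IORef

-- A node holds the current sample plus a staged next sample.  Reading
-- during the evaluation of the circuit at time T never mutates anything;
-- all mutation happens in a separate "advance" phase moving to T+dT.
data Node a = Node { current :: IORef a, pending :: IORef a }

newNode :: a -> IO (Node a)
newNode x = Node <$> newIORef x <*> newIORef x

-- The sample at time T; side-effect-free as far as the network goes.
sample :: Node a -> IO a
sample = readIORef . current

-- Compute the next sample from the current one, but only stage it.
stage :: Node a -> (a -> a) -> IO ()
stage n f = sample n >>= writeIORef (pending n) . f

-- Once the whole circuit has been evaluated at T, commit staged values.
advance :: Node a -> IO ()
advance n = readIORef (pending n) >>= writeIORef (current n)

main :: IO ()
main = do
  -- Euler integration of the constant derivative 1 with dt = 0.1:
  x <- newNode (0 :: Double)
  let step = stage x (+ 0.1) >> advance x
  sequence_ (replicate 10 step)
  sample x >>= print  -- ten steps of 0.1, i.e. approximately 1.0
```

Higher-order integration would simply keep a deeper buffer of past samples in the node, as suggested above.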
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

Ouch, I did not see Wolfgang's email nor your reply, sorry for the noise
(which I'm doing again with this email ;-)

I don't think using dirty tricks to implement FRP deserves flak at all; from my POV, it sounds like complaining that the IO monad is implemented using C, meaning that if you're that close to bare thunks, you have every right to use any means necessary to make them behave properly. Dirtiness is not the problem, but the fact that it can leak out at the present moment. I want guarantees to exclude the possibility of undesired behaviour on the user side. Am I right thinking that the NOINLINE pragma on unsafeDupablePerformIO prevents the problem of multiple evaluation discussed yesterday? Or should I add NOINLINE to primitives in Elerea.Internal too? If that guaranteed sharing, it would certainly solve most of the problems we talked about. Apart from that, I'm still not sure that latching works as intended all the time, but the fact that the breakout example works is an indication that at least it's not hopelessly broken.
Gergely -- http://www.fastmail.fm - Access all of your messages and folders wherever you are

"Patai Gergely"
Am I right thinking that the NOINLINE pragma on unsafeDupablePerformIO prevents the problem of multiple evaluation discussed yesterday?
From what I know and experienced, yes. Each individual unsafePerformIO only ever evaluates its action once, and if they are prevented from duplicating, things should work out as intended. -- (c) this sig last receiving data processing entity. Inspect headers for copyright history. All rights reserved. Copying, hiring, renting, performance and/or quoting of this signature prohibited.

Well, a breakout game does *not* work (yet) in most other FRP
implementations except Yampa, which do have firm theoretical foundations :-)

but the fact that the breakout example works is an indication that at least it's not hopelessly broken. Well, a breakout game does *not* work (yet) in most other FRP implementations except Yampa, which do have firm theoretical foundations :-)
While certainly more entertaining, the problem looks similar enough to the NLift example (a lift serving requests on n floors[0]) in FunWorlds (the 2002 OpenGL version[1], not the 2001 VRML version[2]), chosen to test some expressiveness aspects of the language:

- a dynamically updated collection (requests in NLift, bricks in breakout)
- an object moving in response to user input (lift/paddle+ball)
- collection and object reacting to each other's relative positions (lift at floor levels/paddle-ball-brick collisions)

In NLift, user input (keyboard) adds to the requests collection, and the lift reacts to the request collection and its own status, while in breakout, user input (mouse) directly controls the paddle, to which the ball reacts. The lift stopping at a floor directly removes a request there, while breakout bricks disappear when hit by the ball. In NLift, collisions and movement are one-dimensional, while breakout is two-dimensional.

On the other hand, I hadn't got round to cleaning up the interface, let alone firming up the theoretical foundations, so perhaps this isn't an exception to your rule?-) But I thought I'd mention it on the topic of "other FRP libraries", with variations of approach/concepts.

Claus

[0] http://community.haskell.org/~claus/FunWorlds/NLift.hs
[1] http://community.haskell.org/~claus/FunWorlds/
[2] http://community.haskell.org/~claus/FunWorlds/VRML/

FunWorlds/OpenGL in brief:

- a behaviour is a description of an experiment
- a behaviour can be sampled (performing the experiment), yielding a current value and a residual behaviour (the latter replaces the original behaviour)
- the results of measurements can be broadcast and observed via behavioural channels (a channel observer simply behaves as the channel source behaviour, with a slight delay)

That's it! There is no special role for time at all. One can establish local clocks, one can even broadcast their ticking behaviours. But one cannot take an arbitrary absolute time and ask for the value of a behaviour at that time (other than actually running that behaviour forward or backward from "now").

Also, there is a natural distinction between describing and running a behaviour, with the ability to refer to either the description or to sample outcomes. And having the same behaviour description on both sides of an event in a stepper/until does not mean that nothing changes at the step: the second copy doesn't continue where the first left off, but starts from its own beginning (with no special tricks to achieve this). There are no separate events, and delays enter via behavioural channels.

Well, there were lots of negatives as well (eg FunWorlds was an "engine-exposed" workbench rather than a user-directed library), but I thought I'd try to get you interested first!-) I'd love to have funding to work out the details and clean up/modernize the interface, but without funding, it'll just have to wait until I get round to it (or one of the newer FRP libraries renders it superfluous..).

If you try the examples, you'll notice that some of them run too fast on modern machines (because they weren't tied to an external clock), so you'd have to slow them down (eg, Surface and Torus, in addition to the standard rotate/scale controls, also react to 't'/'T' for scaling time), but they are still fun to watch in their naivete (try Boids for simplicity, Flock2 for chaos - you'll need to scale it 's'/'S').

I think it would be nice if we could make a "reactive benchmark" or something: some tiny examples that capture the essence of reactive systems, and a way to compare each solution's pros and cons. For example, the "plugging a space leak with an arrow" paper reduces the recursive signal problem to

e = integral 1 e

Maybe the NLift problem is a good example for dynamic collections, but I guess we'll need more examples. The reason why I'm talking about examples and not semantics is because the latter seems to be pretty hard to get right for FRP?
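To make the benchmark equation concrete, here is the recursive signal in a deliberately naive list-of-samples model. This is plain Euler summation over a fixed time step; 'integral' here is a toy, not the combinator of any particular FRP library:

```haskell
-- A signal as an infinite list of samples taken every dt seconds.
-- 'integral dt x0 xs' is the running Euler sum of xs starting at x0.
integral :: Double -> Double -> [Double] -> [Double]
integral dt x0 xs = scanl (\acc x -> acc + x * dt) x0 xs

-- The recursive signal from the space-leak paper: e' = e with e(0) = 1,
-- i.e. an approximation of exp t.
e :: [Double]
e = integral 0.001 1 e

main :: IO ()
main = print (e !! 1000)  -- close to exp 1 for small dt
```

The definition is productive because scanl emits its initial value before consuming any input; that built-in one-step delay is exactly what makes e = integral 1 e well-defined.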

Peter Verswyvelen
The reason why I'm talking about examples and not semantics is because the latter seems to be pretty hard to get right for FRP?
There's a difference between those two? I've heard much, but never anyone complaining about specifications overlapping in a compatible way.

On the other hand, breaking referential transparency in the external interface is a very bad idea, in my opinion. Actually, this means that the library user would have to turn certain compiler optimizations off to get the intended behavior. However, in practice you can compile Elerea with -O2 without ill effects. In fact, that's what happens if you install it with cabal.

Just have a look at the Haddock docs of unsafePerformIO. Yes, I did that too, and came up with the following checklist:

- the order of side effects doesn't matter much, since the resulting networks are equivalent if we don't rely on the automatic delay feature (applicative optimisations can be different, but still with the same net effect)
- unsafePerformIO is apparently never inlined, i.e. each instance is executed once, so sharing works as desired
- let-floating is no problem, because all instances of unsafePerformIO rely on surrounding function arguments
- CSE is no problem either; it even helps if it's performed (and it is with optimisations turned on), since it results in smaller equivalent networks

I think we can expect it to be fairly well-behaved, because the 'side effect' of Elerea primitives is basically the same as that of pure values in general: upon evaluation a value is created in memory and we get a reference to it. We only have an extra constraint for the compiler: never duplicate these values. Merging identical ones is okay, and in fact desirable. The following code demonstrates this if you compile it with and without optimisations:

import Control.Applicative
import Control.Monad
import FRP.Elerea
import System.IO.Unsafe

cint a b = unsafePerformIO (putStrLn "!") `seq`
           transfer 0 (\dt x x0 -> x0+x*dt) b

mysig = (latcher 0 (b >@ 0.3) (const (cint a b) <$> cint a b)) +
        (cint a b) + (cint a b) + a
  where a = pure 4
        b = stateful 0 (+)

main = replicateM 10 (superstep mysig 0.1) >>= print

I'd like to see an example where optimisation does make a difference, because I'm still unsure about the consequences of 'unsafeness'.

Gergely -- http://www.fastmail.fm - Or how I learned to stop worrying and love email again

There's no guarantee about unsafePerformIO not being inlined, that's
just how ghc treats it.

Well, the documentation says:

Use {-# NOINLINE foo #-} as a pragma on any function foo that calls unsafePerformIO. If the call is inlined, the I/O may be performed more than once.

So you claim this does not prevent GHC from inlining it anyway? That feels like a bug then, both in the documentation and in NOINLINE.
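For reference, the idiom the quoted documentation describes looks like this (a standalone toy, not Elerea code): the pragma keeps every use site pointing at the same cell.

```haskell
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A top-level value created by unsafePerformIO, with NOINLINE so that
-- every use refers to the one shared IORef.  If the definition were
-- inlined, each use site could allocate its own cell and updates made
-- through one occurrence would be invisible through another.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

main :: IO ()
main = do
  modifyIORef counter (+1)
  modifyIORef counter (+1)
  readIORef counter >>= print  -- prints 2: both writes hit the shared cell
```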

Hello Peter, Thursday, April 16, 2009, 12:29:41 PM, you wrote: Lennart (and Patai) were talking about unsafePerformIO, you about NOINLINE.
-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

With NOINLINE you should be safe, but you never mentioned that originally. :)

Oh I see. Gergely did mention it in a previous email:

Am I right thinking that the NOINLINE pragma on unsafeDupablePerformIO prevents the problem of multiple evaluation discussed yesterday? Or should I add NOINLINE to primitives in Elerea.Internal too? If that guaranteed sharing, it would

So I got confused, mixing up several emails here. Mea culpa.
On Thu, Apr 16, 2009 at 12:40 PM, Lennart Augustsson wrote: With NOINLINE you should be safe, but you never mentioned that originally.
:) Well, the documentation says:
Use {-# NOINLINE foo #-} as a pragma on any function foo that
calls unsafePerformIO. If the call is inlined, the I/O may be performed
more
than once.
So you claim this does not prevent GHC to inline it anyway? That feels On Thu, Apr 16, 2009 at 10:29 AM, Peter Verswyvelen a bug then, both in the documentation and NOINLINE
On Thu, Apr 16, 2009 at 10:18 AM, Lennart Augustsson
There's no guarantee about unsafePerformIO not being inlined, that's
just how ghc treats it. 2009/4/16 Patai Gergely On the other hand, breaking referential transparency in the
external interface is a very bad idea, in my opinion. Actually,
this means that the library user would have to turn certain
compiler optimizations off to get the intended behavior.
However, in practice you can compile Elerea with -O2 without ill
effects. In fact, that's what happens if you install it with cabal. Just have a look at the Haddock docs of unsafePerformIO.
Yes, I did that too, and came up with the following checklist: - the order of side effects doesn't matter much, since the resulting
networks are equivalent if we don't rely on the automatic delay feature (applicative optimisations can be different, but still with the same
net
effect)
- unsafePerformIO is apparently never inlined, i.e. each instance is
executed once, so sharing works as desired
- let-floating is no problem, because all instances of unsafePerformIO
rely on surrounding function arguments
- CSE is no problem either, it even helps if it's performed (and it is
with optimisations turned on), since it results in smaller equivalent
networks I think we can expect it to be fairly well-behaving, because the 'side
effect' of Elerea primitives is basically the same as that of pure
values in general: upon evaluation a value is created in the memory
and
we get a reference to it. We only have an extra constraint for the
compiler: never duplicate these values. Merging identical ones is
okay,
and in fact desirable. The following code demonstrates this if you
compile it with and without optimisations: import Control.Applicative
import Control.Monad
import FRP.Elerea
import System.IO.Unsafe cint a b = unsafePerformIO (putStrLn "!") `seq`
transfer 0 (\dt x x0 -> x0+x*dt) b mysig = (latcher 0 (b >@ 0.3) (const (cint a b) <$> cint a b)) +
(cint a b) + (cint a b) + a
where a = pure 4
b = stateful 0 (+) main = replicateM 10 (superstep mysig 0.1) >>= print I'd like to see an example where optimisation does make a difference,
because I'm still unsure about the consequences of 'unsafeness'.

Gergely

--
http://www.fastmail.fm - Or how I learned to stop worrying and love email again

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

But expressions that use unsafePerformIO might get inlined.

That's true, but given the way this interface is used, this doesn't seem to be an issue, since there are typically no unsafePerformIO's hidden deep inside an expression without a chain of Elerea primitives leading to it.
What about evaluation time? If I remember correctly, the values of signals depend on the time when the signal expressions are evaluated. So evaluating them multiple times might lead to different behavior. Is this correct?

It is. However, there is really only one construct that needs extra care here: the latcher. All the others create top-level nodes that get evaluated just once during the first attempt to sample the output of the network. Therefore, duplication and merging of identical expressions only affects the performance unless they are hidden in the input signal of a latcher.
But in contrast to ordinary values, you have an internal mutable state, don’t you?

Yes. But the access patterns are quite restricted, which makes things a bit easier to reason about. I can refer to the Hume language again: it stores state in mutable variables, but restricts each of them to have exactly one consumer and one producer, and the movement of data is also strictly regulated. Elerea lifts some restrictions of Hume: a variable can have many consumers, and nodes have no obligatory delays, so a piece of data can travel through the whole network in a single superstep. It still seems to be tractable, but no guarantees yet.
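The delayless propagation mentioned here can be played out in a toy model. This is not Elerea's actual implementation; Node, source, mapNode and superstep below are made-up names, and the sketch only illustrates how a value written at the source can reach the sink within the same superstep when nodes carry no obligatory delay.

```haskell
-- Toy model of a delayless node network (hypothetical; not Elerea's
-- real internals). Each node is a mutable cell plus an update action
-- that refreshes its dependency first, so a new source value flows
-- through the whole chain within a single superstep.
import Data.IORef

data Node a = Node { cell :: IORef a, update :: IO () }

-- A source node: its value is set from outside, nothing to recompute.
source :: a -> IO (Node a)
source x = do
  r <- newIORef x
  return (Node r (return ()))

-- A delayless function node: updating it first updates its input,
-- then recomputes its own cell from the input's fresh value.
mapNode :: (a -> b) -> Node a -> b -> IO (Node b)
mapNode f n y0 = do
  r <- newIORef y0
  let upd = do
        update n
        x <- readIORef (cell n)
        writeIORef r (f x)
  return (Node r upd)

-- One superstep: recompute the chain, then read the sink.
superstep :: Node a -> IO a
superstep n = update n >> readIORef (cell n)
```

For a chain source → (+1) → (*2), writing 10 into the source and running one superstep on the sink yields 22 immediately; a Hume-style obligatory delay per box would only let the new value arrive steps later.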
In the end it boils down to getting the latcher to evaluate the inner signal at the first moment too. This is easy to do with just top-level latchers (it takes a single seq at the right place), but latchers embedded in latchers can only be guaranteed to work if the inner signal is not just evaluated but sampled too (an extra call to signalValue). Which is probably what should be done anyway. If all these precautions are taken, the resulting networks should be equivalent with respect to superstep, which is the only way to look at their output. Nevertheless, I'm trying to find ways to create the network in a safer way without sacrificing flexibility on the user end. In the meantime, I'm silently analysing the consequences of the current design to see where and how it can break.

Gergely

--
http://www.fastmail.fm - One of many happy users: http://www.fastmail.fm/docs/quotes.html

On Tuesday, 21 April 2009 17:18, Patai Gergely wrote:
What about evaluation time? If I remember correctly, the values of signals depend on the time when the signal expressions are evaluated. So evaluating them multiple times might lead to different behavior. Is this correct?
It is. However, there is really only one construct that needs extra care here: the latcher. All the others create top-level nodes that get evaluated just once during the first attempt to sample the output of the network. Therefore, duplication and merging of identical expressions only affects the performance unless they are hidden in the input signal of a latcher.
But isn’t the latter a fundamental problem? To make things a bit more concrete, look at the following code snippet:

time :: Signal DTime
time = stateful 0 (+)

timeSwitching :: Signal Bool -> Signal DTime
timeSwitching control = latcher time control (pure time)

If the time signal is evaluated only once then for any signal c, timeSwitching c should be equivalent to time. But what if I replace time by stateful 0 (+) in the definition of timeSwitching? If stateful 0 (+) is evaluated every time a switch occurs then timeSwitching would always switch back to time 0.

So one first has to answer the question what the intended semantics should be. Should signals start at the beginning or should they start every time they are switched into? Implementing the first semantics is difficult since the system would have to know what signals will be used later. I think this is impossible in general because of undecidability issues. (Grapefruit’s approach is to force the user to specify what signals are used later.) Implementing the second semantics would require a single signal having possibly different values when started at different times. This in turn would disallow caching of signal values in mutable variables.

Maybe I just misunderstood something, so please correct me if I’m wrong.

Best wishes,
Wolfgang
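The two candidate semantics can be played out with a plain IORef accumulator standing in for stateful 0 (+). This is a hand-rolled model, not Elerea's API; mkTime and demo are invented names. Evaluating the expression once gives a single accumulator that keeps its state across switches, while re-evaluating it at switch time creates a fresh accumulator that restarts from 0.

```haskell
-- Hand-rolled stand-in for a 'stateful 0 (+)' time signal: each call
-- to mkTime creates one independent accumulator instance; the returned
-- action plays the role of sampling the signal with a time step dt.
import Data.IORef

mkTime :: IO (Int -> IO Int)
mkTime = do
  r <- newIORef 0
  return $ \dt -> do
    modifyIORef r (+ dt)
    readIORef r

-- "Start at the beginning": one instance is shared across switches,
-- so switching back continues from the accumulated value.
-- "Start at switch time": a fresh instance per switch restarts at 0.
demo :: IO (Int, Int)
demo = do
  shared <- mkTime
  _ <- shared 1
  _ <- shared 1
  continued <- shared 1   -- third sample of the same instance: 3
  fresh <- mkTime         -- re-evaluated at "switch time"
  restarted <- fresh 1    -- first sample of a new instance: 1
  return (continued, restarted)
```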

Therefore, duplication and merging of identical expressions only affects the performance unless they are hidden in the input signal of a latcher.
But isn't the latter a fundamental problem?

Of course it is, but I said afterwards that this can be resolved by sampling 'more thoroughly'.
If the time signal is evaluated only once then for any signal c, timeSwitching c should be equivalent to time. But what if I replace time by stateful 0 (+) in the definition of timeSwitching? If stateful 0 (+) is evaluated every time a switch occurs then timeSwitching would always switch back to time 0.

The value stored in a constant signal is supposed to be evaluated exactly once, therefore in this case the expected behaviour is that timeSwitching is equivalent to time. The meaning of 'pure sig' is a signal that refers to 'sig' at every point. At the present moment, if you express the same by lifting the signal constructor directly, it will also be evaluated exactly once, but only when the control signal is true for the first time. This is obviously wrong, but it can be cured by forcing evaluation at the beginning.
This also means that if you want to restart a signal without external dependencies using a latcher, you have to inject some bogus dependency to prevent memoisation. If the new signal depends on some others, latching should behave intuitively.
So one first has to answer the question what the intended semantics should be. Should signals start at the beginning or should they start every time they are switched into?

Clearing up the semantics is certainly on my todo list. :)
Implementing the first semantics is difficult since the system would have to know what signals will be used later. I think this is impossible in general because of undecidability issues. (Grapefruit's approach is to force the user to specify what signals are used later.)

Do you have a compact use case that demonstrates this problem?
Implementing the second semantics would require a single signal having possibly different values when started at different times. This in turn would disallow caching of signal values in mutable variables.

Or at least it would require deep copying some initial snapshot at every restart. But this only applies to completely self-contained signals, since anything that depends on the outer world cannot be restarted by definition.
Gergely

--
http://www.fastmail.fm - Faster than the air-speed velocity of an unladen european swallow

On Wednesday, 22 April 2009 16:00, Patai Gergely wrote:
This also means that if you want to restart a signal without external dependencies using a latcher, you have to inject some bogus dependency to prevent memoisation. If the new signal depends on some others, latching should behave intuitively.
So does this mean that whether a signal is started at the beginning or at switching time depends on what dependencies the signal has?
Implementing the first semantics is difficult since the system would have to know what signals will be used later. I think this is impossible in general because of undecidability issues. (Grapefruit's approach is to force the user to specify what signals are used later.)
Do you have a compact use case that demonstrates this problem?
Maybe it’s not directly undecidability. What I have in mind is the following: Say, you have a complicated function f :: DTime -> Bool and two signals s1 and s2. Then you form the following signal: (\t -> if f t then s1 else s2) <$> time. In order to know what signals should be started at the beginning, you would have to know whether f will ever yield False or True so that you know which of s1 and s2 will be needed at some point.
Implementing the second semantics would require a single signal having possibly different values when started at different times. This in turn would disallow caching of signal values in mutable variables.
Or at least it would require deep copying some initial snapshot at every restart. But this only applies to completely self-contained signals, since anything that depends on the outer world cannot be restarted by definition.
Why not? You could have a signal which always yields the last key press. This clearly depends on the outer world. Then you construct another signal out of this which counts how often a certain key was pressed. If this latter signal is evaluated several times, it could start counting from different times on, couldn’t it?

Best wishes,
Wolfgang
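Wolfgang's key-counting scenario can be mimicked with an event log held in an IORef. The names newSource, press and newCounter are invented for this sketch and have nothing to do with Elerea's API; the point is only that two counter instances built from the same expression but created at different times report different counts.

```haskell
-- A mutable log of key presses, newest first, standing in for the
-- outer-world "last key press" signal.
import Data.IORef

newSource :: IO (IORef [Char])
newSource = newIORef []

press :: IORef [Char] -> Char -> IO ()
press src c = modifyIORef src (c :)

-- A counter instance only sees presses that happen after its creation,
-- so its values depend on when the expression was evaluated.
newCounter :: IORef [Char] -> Char -> IO (IO Int)
newCounter src k = do
  alreadySeen <- fmap length (readIORef src)
  return $ do
    evs <- readIORef src
    return (length (filter (== k) (take (length evs - alreadySeen) evs)))
```

With presses a, [create c1], a, a, [create c2], a, sampling c1 gives 3 while c2 gives 1, which is exactly the "two different signals from one expression" effect under discussion.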

So does this mean that whether a signal is started at the beginning or at switching time depends on what dependencies the signal has?

No, the situation is even more complicated, since some of its dependencies might be aged through other dependency chains, while others can only be animated through the signal in question. But one has to make it clear what the concept of signal actually means, see below.
What I have in mind is the following: Say, you have a complicated function f :: DTime -> Bool and two signals s1 and s2. Then you form the following signal: (\t -> if f t then s1 else s2) <$> time. In order to know what signals should be started at the beginning, you would have to know whether f will ever yield False or True so that you know which of s1 and s2 will be needed at some point.

I see what you mean. Well, if a signal is not needed during sampling, it's simply not aged by Elerea. So unless they are referenced elsewhere, one of them will be inactive.
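The demand-driven behaviour described here (a signal that is not needed during sampling is simply not touched) has a rough analogue in a streams-as-lists toy model. Sig and switchSig below are made-up names and not Elerea code; laziness plays the role of "not aging": the samples of the untaken branch are never forced, so no static knowledge of f is needed.

```haskell
-- Toy model: a signal is a (possibly infinite) list of samples.
type Sig a = [a]

-- Pointwise choice between two signals, driven by f applied to time,
-- mirroring (\t -> if f t then s1 else s2) <$> time.
switchSig :: (Int -> Bool) -> Sig a -> Sig a -> Sig Int -> Sig a
switchSig f s1 s2 time = zipWith3 pick time s1 s2
  where pick t x y = if f t then x else y

-- If f never selects the second branch, its samples are never forced,
-- even though they are all bottoms:
demo :: [Int]
demo = take 3 (switchSig (const True) [1 ..] (repeat (error "never sampled")) [0 ..])
```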
Why not? You could have a signal which always yields the last key press. This clearly depends on the outer world. Then you construct another signal out of this which counts how often a certain key was pressed. If this latter signal is evaluated several times, it could start counting from different times on, couldn't it?

I think the concept of signal and signal transformer is getting mixed up here. The problem with restarting is that we don't know the boundaries. In the case of Yampa everything is clear, since we are dealing with signal functions that have an explicit entry and exit point. But if we're working with the signals themselves, the inputs are not accessible any more. Considering this, what would restarting a key count signal mean?
In the end, the question is if there is some simple intuitive ('least surprise') semantics that fits the behaviour of Elerea, maybe with some adjustments here and there, and what it could possibly look like.

Gergely

--
http://www.fastmail.fm - Same, same, but different...

On Thursday, 16 April 2009 10:06, Patai Gergely wrote:
unsafePerformIO is apparently never inlined, i.e. each instance is executed once, so sharing works as desired
But expressions that use unsafePerformIO might get inlined.
CSE is no problem either, it even helps if it's performed (and it is with optimisations turned on), since it results in smaller equivalent networks
What about evaluation time? If I remember correctly, the values of signals depend on the time when the signal expressions are evaluated. So evaluating them multiple times might lead to different behavior. Is this correct?
I think we can expect it to be fairly well-behaved, because the 'side effect' of Elerea primitives is basically the same as that of pure values in general: upon evaluation a value is created in memory and we get a reference to it.
But in contrast to ordinary values, you have an internal mutable state, don’t you?

Best wishes,
Wolfgang

On the other hand, I hadn't got round to cleaning up the interface, let alone firming up the theoretical foundations, so perhaps this isn't an exception to your rule?-) But I thought I'd mention it on the topic of "other FRP libraries", with variations of approach/concepts.

Thanks for the pointers. I remember coming across this one earlier, but I forgot about it. Your description suggests that FunWorlds has a lot in common with Elerea. It's mostly the interface that looks different. Apparently I'll have to play with this one too. :)
Gergely -- http://www.fastmail.fm - The professional email service

I will test it on a couple of machines, desktops and laptops.

Try using a sensible nonzero value with threadDelay. Apparently it brings CPU usage down under Windows while retaining smoothness. However, increasing it from zero results in jerkiness under Linux...
If you take a look at what Yampa does: it hides signals and only exposes signal functions. But that means that the FRP engine itself could indeed use mutable variables for its signals, as long as no side effects occur during the evaluation of the circuit at time T; the side effects should take place when the simulation is advanced to T+dT, which is done after the circuit is fully evaluated at time T.

Assuming that everything works as intended, Elerea is indeed free of side effects. As for the underlying engine, I was also considering the Hume language, but it has the unpleasant property that every box (analogous to the signal functions of Yampa) has an implicit delay. In fact, Elerea can be regarded as some kind of delayless Hume if we squint enough.
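The two-phase discipline just described (only read while evaluating at time T, advance all state to T+dT afterwards) can be sketched with a pair of explicit phases. The step function and the integrator pair below are illustrative only, not Yampa's or Elerea's code.

```haskell
-- r1 integrates a constant rate of 1; r2 integrates r1.
-- Phase 1 only reads (the "evaluation at time T"); phase 2 then
-- advances all state to T+dt, so no output at T can observe a
-- T+dt value.
import Data.IORef

step :: IORef Double -> IORef Double -> Double -> IO (Double, Double)
step r1 r2 dt = do
  a <- readIORef r1           -- phase 1: sample both cells at T
  b <- readIORef r2
  writeIORef r1 (a + dt)      -- phase 2: advance to T+dt
  writeIORef r2 (b + a * dt)  -- uses r1's value at T, not at T+dt
  return (a, b)
```

Starting from (0, 0) with dt = 1, successive steps report (0,0), (1,0), (2,1): each output reflects the state strictly before the current advance.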
I'm only a bit worried about your automatic insertion of delays; this might break referential transparency at time T, since it depends on the order in which the nodes in the circuit are evaluated, no? The latter could be problematic when doing evaluation in parallel on multiple cores, I guess.

Oh, it's problematic enough even on a single core. ;) If the network changes at one place, it might affect evaluation order in a way that delays can start wandering around in dependency cycles far away. It could be interesting to analyse the effects of this behaviour. Either way, this is just a convenience feature for applications where it makes little difference. I was also thinking about providing two versions of the library: one inserting delays, and another giving an error instead.
By the way, it's also in the plans to visualise the network.

Gergely

--
http://www.fastmail.fm - IMAP accessible web-mail

That's rather amazing. Is this the first FRP lib that does this?

I can't be 100% certain, but it is very likely. It's nearly trivial to do with my approach, since every node of the network is a mutable variable, and I can simply mark the nodes I have already visited in the current superstep. It's not exactly the right thing to do from a theoretical standpoint, but as I said, you can avoid it at will, so I figured it won't do much harm in an experimental library. :)
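The visited-mark trick can be sketched over a tiny mutable node graph. Node, age and superstep here are invented names, not Elerea's internals: each superstep clears the marks and then ages every reachable node, and a node shared on two paths (a diamond) has its action run exactly once per step.

```haskell
-- Toy mutable node graph with a visited flag per node.
import Data.IORef
import Control.Monad (unless, forM_)

data Node = Node { visited :: IORef Bool, deps :: [Node], act :: IO () }

mkNode :: [Node] -> IO () -> IO Node
mkNode ds a = do
  v <- newIORef False
  return (Node v ds a)

-- Age a node at most once per superstep: skip it if already visited.
age :: Node -> IO ()
age n = do
  seen <- readIORef (visited n)
  unless seen $ do
    writeIORef (visited n) True  -- mark first, so a node shared by
                                 -- several paths is aged only once
    mapM_ age (deps n)
    act n

-- One superstep: reset every mark, then age from the output node.
superstep :: [Node] -> Node -> IO ()
superstep allNodes out = do
  forM_ allNodes $ \n -> writeIORef (visited n) False
  age out
```

In a diamond root → {a, b} → shared, the shared node's action fires once per superstep even though two paths reach it.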
-- http://www.fastmail.fm - Access your email from home and the web

On Friday, 10 April 2009 18:41, Patai Gergely wrote:
is based on some unsafePerformIO dark magic (that might easily break depending on many factors)
I wonder if this breaks referential transparency. Say, you define a signal s and use s twice in some expression. s may be evaluated once and it may be evaluated twice. Does this make a difference? In the Haddock docs, you say that repeated evaluation of the same value is prevented by caching. However, caching probably only works if the signal in question is only evaluated once. If it’s evaluated twice, the second “instance” probably cannot reuse cached values of the first “instance”. Is it possible that thereby the second “instance” has different values than the first one?

A well known FRP problem is that the values of a signal with internal state (an accumulating signal) depend on the time when the signal is started, i.e., when accumulation starts. When are your signals started? At evaluation time? If yes, doesn’t this mean that the meaning of a signal depends on evaluation time, so that evaluating the same signal expression twice actually results in two different signals?

Best wishes,
Wolfgang
participants (10)

- Achim Schneider
- Bulat Ziganshin
- Claus Reinke
- Conal Elliott
- Joe Fredette
- Lennart Augustsson
- minh thu
- Patai Gergely
- Peter Verswyvelen
- Wolfgang Jeltsch