FRP for game programming / artificial life simulation

I have an interest in both game programming and artificial life. I have recently stumbled on Haskell and would like to take a stab at programming a simple game using an FRP library such as Yampa or Reactive, but I am stuck: I am not certain which one I should choose. It seems that Reactive is more active, but is it suitable for game programming? Also, has anyone attempted to implement neural networks using FRP? If so, which of these two approaches to FRP would you suggest?

On Wed, Apr 21, 2010 at 4:47 PM, Ben Christy wrote:
I have an interest in both game programming and artificial life. I have recently stumbled on Haskell and would like to take a stab at programming a simple game using an FRP library such as Yampa or Reactive, but I am stuck: I am not certain which one I should choose. It seems that Reactive is more active, but is it suitable for game programming? Also, has anyone attempted to implement neural networks using FRP? If so, which of these two approaches to FRP would you suggest?
I am in the process of writing a game using FRP. I haven't followed Reactive in a while, but last I checked it had some rather annoying issues, such as joinE (monad join on events) not working and an open space leak. So we are using a Yampa-like approach, but not specifically Yampa. However, most of the game logic is *not* in AFRP ("arrowized" FRP) style; it is just there to give a nice foundation and top-level game loop, playing much the same role as IO does in many Haskell programs (but it is implemented purely!).

The workhorse of our game has so far been "generalized differentials". While not entirely rigorous, they have provided a very nice framework in which to express our thoughts and designs, and are very good at the "highly dynamic" situations which appear in games. For example, with arrows it is painful to maintain a list of moving actors that can be added and removed. With differentials this is quite natural.

I haven't published the differential library yet; I am waiting until we have used it enough to discover essential techniques and find a nice basis for primitives. But I will give a sketch here. Let the types be your guide, as I am implementing from memory without a compiler :-P
import qualified Data.Accessor.Basic as Acc
import Data.VectorSpace
import Control.Comonad
A differential is implemented as a function that takes a timestep and returns an update function. Don't expose the D constructor; step is okay to expose, it's kind of a generalized "linear approximation".
newtype D a = D { step :: Double -> a -> a }
instance Monoid (D a) where
    mempty = D (const id)
    mappend da db = D (\dt -> step da dt . step db dt)
Given a differential for a component of a value, we can construct a differential for that value.
accessor :: Acc.T s a -> D a -> D s
accessor acc da = D (Acc.modify acc . step da)
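For instance (an illustrative sketch of my own, not from the original message; Player and posAcc are hypothetical, and vector is the combinator defined a bit further down):

data Player = Player { playerPos :: (Double, Double), playerHealth :: Int }

-- an accessor for the position field
posAcc :: Acc.T Player (Double, Double)
posAcc = Acc.fromSetGet (\p pl -> pl { playerPos = p }) playerPos

-- drift the player to the right at one unit per second, leaving health untouched
dPlayer :: D Player
dPlayer = accessor posAcc (vector (1, 0))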
Given a differential for each component of a tuple, we can find the differential for the tuple.
product :: D a -> D b -> D (a, b)
product da db = D (\dt (x,y) -> (step da dt x, step db dt y))
A differential can depend on the current value.
dependent :: (a -> D a) -> D a
dependent f = D (\dt x -> step (f x) dt x)
Vectors can be treated directly as differentials over themselves.
vector :: (VectorSpace v, Scalar v ~ Double) => v -> D v
vector v = D (\dt x -> x ^+^ dt *^ v)
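For example (my own illustration, not from the original message), a falling particle whose position follows its velocity and whose velocity follows gravity can be written by combining dependent, product, and vector:

-- (position, velocity), both 2D vectors
dFalling :: D ((Double, Double), (Double, Double))
dFalling = dependent $ \(_, v) ->
    product (vector v)           -- position changes at the current velocity
            (vector (0, -9.81))  -- velocity changes by gravity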
Impulses allow non-continuous "burst" changes, such as adding/removing an element from a list of actors. This is the only function that bugs me. Used incorrectly, it lets you detect the framerate, which is supposed to be hidden. But if used correctly, i.e. only triggered on passing conditions, impulses can be quite handy. Still, my eyes and ears are open for alternatives.
impulse :: (a -> a) -> D a
impulse f = D (const f)
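For example (again my own illustration; Actor, spawnRequested, and newActor are hypothetical names), spawning an actor only on the frame where the request arrives:

dActors :: Input -> D [Actor]
dActors input
    | spawnRequested input = impulse (newActor :)   -- burst change: add one actor
    | otherwise            = mempty                 -- no change this frame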
If we can find the differential for an element of some comonad given its context, we can find the differential for the whole structure. (Our "game world" is a comonad; that's why this is in here.)
comonad :: (Comonad w) => (w a -> D a) -> D (w a)
comonad f = D (\dt -> let h w = step (f w) dt (extract w) in extend h)
I add new primitives at the drop of a hat. I would like to find a nice combinator basis, but as yet, one hasn't jumped out at me. It might require some tweaking of the concept.

The arrow we are using is implemented in terms of differentials:
data Continuous a b = forall s. Continuous s (s -> a -> (b, D s))
instance Category Continuous where
    id = Continuous () (\() x -> (x, mempty))
    Continuous sg0 g . Continuous sf0 f = Continuous (sg0, sf0) $ \(sg, sf) x ->
        let !(y, df) = f sf x  -- mind the strict patterns
            !(z, dg) = g sg y
        in (z, product dg df)
Exercise: implement the Arrow and ArrowLoop instances.
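(For the curious, here is one untested sketch of how those instances might go, assuming the Continuous type above and the usual Control.Category / Control.Arrow imports; this is my own guess, not code from the original message:)

instance Arrow Continuous where
    arr f = Continuous () (\() x -> (f x, mempty))
    first (Continuous s0 f) = Continuous s0 $ \s (x, c) ->
        let (y, ds) = f s x in ((y, c), ds)

instance ArrowLoop Continuous where
    loop (Continuous s0 f) = Continuous s0 $ \s x ->
        -- ties the knot on c; relies on f being sufficiently lazy
        let ((y, c), ds) = f s (x, c) in (y, ds)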
And here is where it comes together. Integration over generalized differentials is a continuous arrow:

integral :: a -> Continuous (D a) a
integral a0 = Continuous a0 (,)
So our game loop looks something like:
dGameState :: Input -> D GameState
dGameState = ... -- built out of simpler Ds of its components
mainGame = proc input -> do
    gameState <- integral initialGameState -< dGameState input
    returnA -< drawGameState gameState
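(As an aside, not from the original message: a top-level loop could drive such an arrow one frame at a time with something like the following, where stepC is my own hypothetical name:)

-- advance a Continuous arrow by one frame: feed it an input, take its output,
-- and integrate its state differential over the timestep dt
stepC :: Double -> a -> Continuous a b -> (b, Continuous a b)
stepC dt x (Continuous s f) =
    let (y, ds) = f s x
    in  (y, Continuous (step ds dt s) f)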
This is my first experience with functional game programming, and so far I love it! It makes so much more sense than the imperative alternative. But the techniques are quite new and different as well, and sometimes it takes a lot of thinking to figure out how to do something that would be obvious to an experienced imperative game programmer. But I consider it a virtue: it's a chance to re-evaluate all the nonsense we've built up over the years and pioneer the field all over again.

I hope this helps. If you go with a different approach, please write about it!

Luke

This is really good stuff, Luke. I am interested in learning more, especially in seeing examples or actual game code that implement the more common parts of a game. I built a game ("silkworm") in Haskell that was one of my first Haskell programs. The code was not pretty, and I always felt there was a better way. It seems you are on to a better way. When you're ready, I'll be watching for the announcement ;)

Regards,
Duane Johnson

I agree, thank you for the info.

Luke Palmer wrote:
The workhorse of our game has so far been "generalized differentials". While not entirely rigorous, they have provided a very nice framework in which to express our thoughts and designs, and are very good at the "highly dynamic" situations which appear in games. For example, with arrows it is painful to maintain a list of moving actors that can be added and removed. With differentials this is quite natural.
Interesting! I've experimented with writing a game in Haskell recently, albeit in a rather traditional imperative style, and I have a few thoughts on what your style does and does not accomplish.
A differential is implemented as a function that takes a timestep and returns an update function. Don't expose the D constructor; step is okay to expose, it's kind of a generalized "linear approximation".
newtype D a = D { step :: Double -> a -> a }
instance Monoid (D a) where
    mempty = D (const id)
    mappend da db = D (\dt -> step da dt . step db dt)
Given a differential for a component of a value, we can construct a differential for that value.
accessor :: Acc.T s a -> D a -> D s
accessor acc da = D (Acc.modify acc . step da)
product :: D a -> D b -> D (a, b)
product da db = D (\dt (x,y) -> (step da dt x, step db dt y))
comonad :: (Comonad w) => (w a -> D a) -> D (w a)
comonad f = D (\dt -> let h w = step (f w) dt (extract w) in extend h)
It appears to me that your differentials are "just" functional references / state updates (a -> a), the interesting conceptual interpretation notwithstanding. The telltale sign is that your main logic is given by dGameState, which is, in essence, just a function that updates the global game state:
dGameState :: Input -> D GameState
dGameState = ... -- built out of simpler Ds of its components
On one hand, this is disappointing because it's all too close to the imperative style, but on the other hand, the idea of treating state changes as *first class values* gives rise to powerful combinators like comonad and clearly exceeds what imperative languages usually offer. In a sense, it is quite ironic that imperative languages don't abstract over what they are best at doing, namely updating state.

The interpretation as differentials is, unfortunately, not fundamental. The only part where it would come to fruition, namely the definition of continuous functions
The arrow we are using is implemented in terms of differentials:
data Continuous a b = forall s. Continuous s (s -> a -> (b, D s))
mainGame = proc input -> do
    gameState <- integral initialGameState -< dGameState input
    returnA -< drawGameState gameState
only appears as "driver code" at the very end and does not have any impact; you can just as well implement your game loop in the traditional imperative fashion:

mainGame = loop initialGameState
    where
    loop s = do
        i <- getInput
        let s' = step (dGameState i) dt s
        drawGameState s'
        loop s'

In fact, it is preferable to do so, because physics simulations usually require a fixed time step dt, which is slightly tricky to synchronize with the clock on the wall; see also

    Glenn Fiedler. "Fix your timestep!"
    http://gafferongames.com/game-physics/fix-your-timestep/

Arrows don't help with that; in fact, it is difficult to do this with arrow-style FRP! That's because FRP tends to mingle game state updates and drawing, but you have to separate them if you want decent interpolation. Thus, it is no accident that your main loop factorizes into updating and drawing, and that the arrows pretty much disappear from your code, Luke.

So, in my opinion, this style is a traditional imperative approach, though with a proper and powerful abstraction for state updates.

But since we're also on a quest to find the "functional game programming nirvana", can we do better? I have no answer, but for starters, I think I can explicitly describe two qualities that FRP has and that are not available in Luke's approach:

1) In FRP, there is no global *type* GameState that stores the whole game state. Rather, the game state is implicit in the collection of "active" computations. This is also why state updating and drawing is woven together in FRP, which is good syntactically, but hard to disentangle for interpolation.

2) In FRP, all dependencies that may influence the evolution of a value in time are made explicit in its definition. In contrast, a state update can change *any* value later on; the only "protection" against unwanted change are the combinators. For instance, product :: D a -> D b -> D (a,b) guarantees that the D b argument cannot change the a component of the pair (a,b).

Maybe the "nirvana" is closer if we could somehow incorporate one of these. Note that these are purely "syntactic" qualities. In fact, I am convinced that it's not a good idea to focus on the semantics of FRP; the key focus should be on the syntax, on the way of expressing a given thought in "computer words".

Regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com

1) In FRP, there is no global *type* GameState that stores the whole game state. Rather, the game state is implicit in the collection of "active" computations. This is also why state updating and drawing is woven together in FRP, which is good syntactically, but hard to disentangle for interpolation.
I disagree somewhat with this. FRP should be thought of like the IO monad, out of which everything that can be lifted, should be, especially the GameState.

I like to imagine that the FRP's job is to observe the GameState and reenact changes therein. Some changes take a little while to act out, and the FRP element that is doing the action can signal that it isn't ready for the next transition. Or, if no changes occur, the actors can stand around doing idle animations.

Friendly,
--Lane

Actually, I believe that many Yampa examples do separate the drawing from the update... The arrow provides the game data that *can* be rendered. If you provide interpolators for that game data, you can still achieve the same as is explained in "Fix your timestep" (in my own FRP experiments I have an update thread and a render thread).

But IMHO "Fix your timestep" still misses an important detail, in that the delta-time that is measured is the duration of the previous frame, and it is assumed that the next frame will take as long as the previous one (who says that "integrate" from the article won't take longer than dt?). Now say you are updating at 100 FPS = 10ms, but the next frame actually takes longer, say 20ms. That actually means that you should have passed 20ms as the delta-time of this frame, because the real time is ahead now! This is really noticeable as a little jerky frame hiccup in the motion. In my first game (1987), I added an estimator to compute how long the delta-time of the next frame would be, which results in much smoother motion: you notice that the frame-sampling-rate drops, but you don't see a frame hiccup. I rarely see this in modern games; most PC and even console games suffer from frame hiccups (which could be defined as the real time moving ahead of the game time for a brief moment).

Regarding FRP, I like to look at this as a kind of data flow system (a network of "signal transformers", or just "nodes"). However, it is not clear if you want to "pull" or "push" such a network: if side effects are not present, there should be no difference in the "game state" whether you push or pull it, but one can be far more optimal than the other. For example, nodes connected to an analog joystick would most likely benefit from a pull approach, since the joystick always moves a little. But nodes connected to a timer that changes once each second clearly should benefit from a push approach. Although sometimes a static "change frequency" could help to determine whether to push or pull, I believe this can only be determined with profiling. So in a sense, push or pull should be some kind of attribute...

Okay, a bit off topic :)

Peter Verswyvelen wrote:
Actually, I believe that many Yampa examples do separate the drawing from the update... The arrow provides the game data that *can* be rendered. If you provide interpolators for that game data, you can still achieve the same as is explained in "Fix your timestep" (in my own FRP experiments I have an update thread and a render thread).
But the arrow implementation determines the dt at which the arrows ~ (Time -> a) -> (Time -> b) are sampled, no? The end result of a Yampa arrow is a graphic, after all.
But IMHO "Fix your timestep" still misses an important detail, in that the delta-time that is measured is the duration of the previous frame, and it is assumed that the next frame will take as long as the previous one (who says that "integrate" from the article won't take longer than dt?). Now say you are updating at 100 FPS = 10ms, but the next frame actually takes longer, say 20ms. That actually means that you should have passed 20ms as the delta-time of this frame, because the real time is ahead now! This is really noticeable as a little jerky frame hiccup in the motion. In my first game (1987), I added an estimator to compute how long the delta-time of the next frame would be, which results in much smoother motion: you notice that the frame-sampling-rate drops, but you don't see a frame hiccup. I rarely see this in modern games; most PC and even console games suffer from frame hiccups (which could be defined as the real time moving ahead of the game time for a brief moment).
I'm not sure I follow, could you elaborate on what exactly causes the frame hiccup?

As far as I understand it, the approach of "Fix your timestep" is that you have a physics simulation and a rendering engine. To ensure numerical stability, the physics are calculated with a fixed time step dt, which can be larger than the rendering frame time. In particular, one step of physics simulation should take less than dt of real time, because otherwise you're screwed. The graphics engine just draws as fast as possible; to ensure smoothness, it interpolates slightly into the future.

The FPS number measures the frequency of drawn graphics, not the rate of physics updates. There may be multiple physics steps per drawn frame when drawing is slow, or the other way round when drawing is fast.

Regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com
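(A sketch of the fixed-step-plus-interpolation scheme under discussion, my own illustration rather than code from the thread; World, now, stepPhysics, and render are hypothetical placeholders:)

-- run physics at a fixed dt, render as fast as possible, and blend the two
-- most recent physics states by how far we are into the next step
fixedTimestep :: Double -> World -> IO a
fixedTimestep dt world0 = do
    t0 <- now
    go t0 0 world0 world0
  where
    go tPrev acc prev cur = do
        t <- now
        let acc'   = acc + (t - tPrev)                 -- accumulate real time
            steps  = floor (acc' / dt) :: Int          -- whole physics steps owed
            states = iterate (stepPhysics dt) cur
            (prev', cur')
                | steps == 0 = (prev, cur)
                | otherwise  = (states !! (steps - 1), states !! steps)
            acc''  = acc' - fromIntegral steps * dt    -- leftover time, 0 <= acc'' < dt
        render (acc'' / dt) prev' cur'                 -- draw, blending prev' and cur'
        go t acc'' prev' cur'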
participants (6)

- Ben Christy
- Christopher Lane Hinson
- Duane Johnson
- Heinrich Apfelmus
- Luke Palmer
- Peter Verswyvelen