
Luke Palmer wrote:
The workhorse of our game has so far been "generalized differentials". While not entirely rigorous, they have provided a very nice framework in which to express our thoughts and designs, and are very good at handling the "highly dynamic" situations which appear in games. For example, with arrows it is painful to maintain a list of moving actors such that they can be added and removed. With differentials this is quite natural.
Interesting! I've experimented with writing a game in Haskell recently, albeit in a rather traditional imperative style, and I have a few thoughts on what your style does and does not accomplish.
A differential is implemented as a function that takes a timestep and returns an update function. Don't expose the D constructor; step is okay to expose, since it's kind of a generalized "linear approximation".
newtype D a = D { step :: Double -> a -> a }
instance Monoid (D a) where
    mempty        = D (const id)
    mappend da db = D (\dt -> step da dt . step db dt)
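For intuition, here is a toy differential and what combining two of them with the Monoid instance does; the velocity example is my own illustration, not part of Luke's code (and on a modern GHC one would also give D a matching Semigroup instance):

-- Illustration only: a differential that moves a 1-D position at a
-- constant velocity v (units per second).
velocity :: Double -> D Double
velocity v = D (\dt x -> x + v * dt)

-- Combining two differentials applies both updates over the same timestep:
--   step (velocity 3 `mappend` velocity (-1)) 0.1 0   gives 0.2 (up to rounding)
-- i.e. the combination behaves like velocity 2, a "sum" of differentials.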
Given a differential for a component of a value, we can construct a differential for that value.
accessor :: Acc.T s a -> D a -> D s
accessor acc da = D (Acc.modify acc . step da)
product :: D a -> D b -> D (a, b)
product da db = D (\dt (x,y) -> (step da dt x, step db dt y))
comonad :: (Comonad w) => (w a -> D a) -> D (w a)
comonad f = D (\dt -> let h w = step (f w) dt (extract w) in extend h)
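For instance, the comonad combinator lifts a local rule, which only sees one actor together with its context, to a differential for a whole collection of actors. A small sketch of what that might look like (my own illustration, relying on the definitions above and on the Comonad instance for NonEmpty from the comonad package):

import Control.Comonad            -- provides the Comonad instance for NonEmpty
import Data.List.NonEmpty (NonEmpty(..))

-- Illustration only: each actor chases the next actor in the list.
-- The local rule sees an actor together with everything after it,
-- and returns a differential for that actor alone.
follow :: NonEmpty Double -> D Double
follow (x :| rest) = case rest of
    (target:_) -> D (\dt p -> p + dt * (target - x))
    []         -> mempty                      -- the last actor stands still

-- 'comonad follow' is then a differential for the whole collection, e.g.
--   step (comonad follow) 0.1 (0 :| [1, 3])   gives 0.1 :| [1.2, 3] (up to rounding)
dActors :: D (NonEmpty Double)
dActors = comonad follow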
It appears to me that your differentials are "just" functional references / state updates (a -> a), the interesting conceptual interpretation notwithstanding. The telltale sign is that your main logic is given by dGameState, which is, in essence, just a function that updates the global game state:
dGameState :: Input -> D GameState
dGameState = ... -- built out of simpler Ds of its components
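Presumably the body is assembled from the combinators above; purely as illustration (all of these names and types are my guesses, not Luke's actual code), it might look like:

-- Hypothetical sketch only, assuming Prelude's 'product' is hidden in
-- favour of the combinator above.
type Player    = Double                 -- player's x position
type Enemies   = [Double]               -- enemy x positions
type GameState = (Player, Enemies)

dPlayer :: Input -> D Player
dPlayer input = D (\dt x -> x + dt * joystick input)
    -- 'joystick :: Input -> Double' is an assumed accessor for the input

dEnemies :: D Enemies
dEnemies = D (\dt -> map (subtract (dt * 1.5)))   -- enemies drift left

dGameState :: Input -> D GameState
dGameState input = product (dPlayer input) dEnemies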
On one hand, this is disappointing because it's all too close to the imperative style, but on the other hand, the idea of treating state changes as *first class values* gives rise to powerful combinators like comonad and clearly exceeds what imperative languages usually offer. In a sense, it is quite ironic that imperative languages don't abstract over what they are best at doing, namely updating state. The interpretation as differentials is, unfortunately, not fundamental. The only part where it would come to fruition, namely the definition of continuous functions
The arrow we are using is implemented in terms of differentials:
data Continuous a b = forall s. Continuous s (s -> a -> (b, D s))
mainGame = proc input -> do
    gameState <- integral initialGameState -< dGameState input
    returnA -< drawGameState gameState
only appears as "driver code" at the very end and does not have any impact; you can just as well implement your game loop in the traditional imperative fashion:

mainGame = loop initialGameState
    where
    loop s = do
        i <- getInput
        let s' = step (dGameState i) dt s
        drawGameState s'
        loop s'

In fact, it is preferable to do so because physics simulations usually require a fixed time step dt, which is slightly tricky to synchronize with the wall clock; see also

    Glenn Fiedler. Fix your timestep!
    http://gafferongames.com/game-physics/fix-your-timestep/

Arrows don't help with that; in fact, it is difficult to do this with arrow-style FRP! That's because FRP tends to mingle game state updates and drawing, but you have to separate them if you want decent interpolation. Thus, it is no accident that your main loop factorizes into updating and drawing, and that the arrows pretty much disappear from your code, Luke.

So, in my opinion, this style is a traditional imperative approach, though with a proper and powerful abstraction for state updates.

But since we're also on a quest to find the "functional game programming nirvana", can we do better? I have no answer, but for starters, I think I can explicitly describe two qualities that FRP has and that are not available in Luke's approach:

1) In FRP, there is no global *type* GameState that stores the whole game state. Rather, the game state is implicit in the collection of "active" computations. This is also why state updating and drawing are woven together in FRP, which is good syntactically, but hard to disentangle for interpolation.

2) In FRP, all dependencies that may influence the evolution of a value in time are made explicit in its definition. In contrast, a state update can change *any* value later on; the only "protection" against unwanted changes are the combinators. For instance, product :: D a -> D b -> D (a,b) guarantees that the D b argument cannot change the a component of the pair (a,b).

Maybe the "nirvana" is closer if we could somehow incorporate one of these. Note that these are purely "syntactic" qualities. In fact, I am convinced that it's not a good idea to focus on the semantics of FRP; the key focus should be on the syntax, on the way of expressing a given thought in "computer words".

Regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com
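PS: For concreteness, a fixed-timestep loop in the spirit of Fiedler's article might look roughly as follows. This is only a sketch; currentTime and getInput are assumed helpers of mine, and the interpolation is merely indicated in a comment.

dt :: Double
dt = 1 / 60                              -- fixed simulation step in seconds

mainLoop :: GameState -> IO ()
mainLoop s0 = do
    t0 <- currentTime                    -- assumed: wall clock in seconds
    go t0 0 s0
  where
    go tPrev acc s = do
        tNow <- currentTime
        i    <- getInput                 -- assumed input poller
        let acc'      = acc + (tNow - tPrev)
            steps     = floor (acc' / dt) :: Int
            -- consume the accumulated real time in whole dt chunks
            s'        = iterate (step (dGameState i) dt) s !! steps
            remainder = acc' - fromIntegral steps * dt
        -- for smooth rendering one would interpolate between s and s'
        -- using (remainder / dt); here we simply draw the latest state
        drawGameState s'
        go tNow remainder s'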