Getting my feet wet - small browser game

OK, after years of looking and discussing and doing HSoE exercises, I have finally decided that Haskell is far enough into practical usefulness that I should give it a try in a "real" project. The basic idea is a browser game - this touches all the potentially hard issues without miring me too deeply in target platform details. I'd like to lay out my plans here, ask how they are going to work out, and take advice.

THE PLAN

I'll start with http://haskell.org/haskellwiki/How_to_write_a_Haskell_program and get a toolchain together. I haven't decided which compiler (interpreter?) to choose; I'll probably go for the one that gives the least amount of trouble.

Next would be library selection. I'm willing to undergo some modest amount of hassle here, since I don't expect all the libraries that I need to be mature yet.

My preliminary plan is to split the application into a central world simulation process, and satellite processes that accept HTTP requests, feed them into the simulation, read back the results, and generate the response HTML. The interface between simulation and satellite is:

* Satellites can read a snapshot of the game data.
* Satellites cannot directly write game data. What they can do is post commands to a blackboard; these are marked as "no more updatable" as soon as the simulation starts executing them.

I expect the simulation and the satellites to be separate OS processes, so I'll need a way to marshall command and game data between processes. The simulation will have to store its entire state to disk, too - in theory, it could run forever and never write to disk (and I wouldn't need a database, either), but in practice, I have to plan for the occasional reboot.

Since the server will be running Apache for other purposes anyway, and I don't want to force the players to use a URL with a port number, I think I'll set up Apache so that it proxies game-related URLs to the Haskell software. I just hope that Apache doesn't incur much overhead in that mode.

I have no idea how to organize software upgrades. The satellites are easy, but how do I make sure that revision N+1 of the simulation can read the marshalled data from revision N?

The final software should be efficient. Ideally, the satellites would be able to saturate the network card of today's typical cheap rootserver if the simulation were a no-op. I have two data points for a "typical cheap rootserver":

* 100 MBit/s Ethernet, 256 MB RAM, 1.2 GHz Celeron (~3 years old)
* 1 GBit/s Ethernet, 1 GB RAM, 2.2 GHz Athlon (current)

Of course, not needing an RDBMS will give the system a head start efficiency-wise.

Comments and suggestions welcome :-)

Regards, Jo

Comments and suggestions welcome :-)

hi Joachim.
i have some suggestions:

apache: use fastcgi instead of hacking up your own http-server. http://www.cs.chalmers.se/~bringert/darcs/haskell-fastcgi/doc/ http://www.cs.chalmers.se/~bringert/darcs/cgi-compat/doc/

server: there are virtual linux servers out there, free to rent. some of them are even cheaper than the power usage of one's old pc (at least relative to speed). if you intend to write a game for thousands of users who play it 24/7, then it may be comfortable to rent one. (friends of mine rented one.)

software upgrades: use the Read/Show classes instead of Foreign.Marshal, and combine them with version-counting data structures:

[code]
data MyData = V1 String
  deriving (Show,Read)
read_v1 :: MyData -> String
---------
data MyData = V1 String | V2 [String]
  deriving (Show,Read)
read_v1 :: MyData -> String
read_v2 :: MyData -> [String]
---------
data MyData = V1 String | V2 [String] | V3 [(String,Int)]
  deriving (Show,Read)
-- obsolete: read_v1 :: MyData -> String
read_v2 :: MyData -> [String]
read_v3 :: MyData -> [(String,Int)]
[/code]

i've thought about writing a browser game in haskell, too; but atm, i have no time for (writing) games.

- marc

Marc A. Ziegert schrieb:
hi Joachim.
i have some suggestions:
apache: use fastcgi instead of hacking up your own http-server. http://www.cs.chalmers.se/~bringert/darcs/haskell-fastcgi/doc/ http://www.cs.chalmers.se/~bringert/darcs/cgi-compat/doc/
Ah, that's excellent. The server in question will have FastCGI installed anyway (actually fcgid, so I'll see what the incompatibilities are).
server: there are virtual linux servers out there, free to rent. some of them are even cheaper than the power usage of one's old pc (at least relative to speed). if you intend to write a game for thousands of users who play it 24/7, then it may be comfortable to rent one. (friends of mine rented one.)
I'm with a small webhosting company, so the server is there already :-) Of course, if the game ever attracts thousands of players and CPU and RAM usage start to affect other web presences, it should move to its own dedicated server.
software upgrades: use the Read/Show classes instead of Foreign.Marshal, and combine them with version-counting data structures:
[code]
data MyData = V1 String
  deriving (Show,Read)
read_v1 :: MyData -> String
---------
data MyData = V1 String | V2 [String]
  deriving (Show,Read)
read_v1 :: MyData -> String
read_v2 :: MyData -> [String]
---------
data MyData = V1 String | V2 [String] | V3 [(String,Int)]
  deriving (Show,Read)
-- obsolete: read_v1 :: MyData -> String
read_v2 :: MyData -> [String]
read_v3 :: MyData -> [(String,Int)]
[/code]
I'll try that out and let everybody know how well that worked. Thanks.
i've thought about writing a browser game in haskell, too; but atm, i have no time for (writing) games.
I've known that feeling for quite a while :-) Regards, Jo

Marc A. Ziegert schrieb:
software upgrades: use Read/Show classes instead of Foreign.Marshal,
I'm having second thoughts here. Wouldn't Show evaluate all thunks of the data being Shown? That would mean I couldn't use infinite data structures in data that goes out to disk.

I don't think this would be a strong restriction for the communication between simulation and satellites, but I'm pretty sure it would be for doing backups of the simulation. Unfortunately, doing simulation backups is also the area where versioning is probably the harder problem.

But I think I can work around that. I'd simply have to write a small upgrade program whenever data structures change, which unmarshalls using the old code and marshalls using the new code.

Regards, Jo

jo:
Marc A. Ziegert schrieb:
software upgrades: use Read/Show classes instead of Foreign.Marshal,
I'm having second thoughts here. Wouldn't Show evaluate all thunks of the data Shown? That would mean I couldn't use infinite data structures in data that goes out to disk.
Btw, if you're dumping large structures to disk, using Read/Show is a bad idea :) Use NewBinary, at a minimum, or one of the other serialisation modules (possibly the one used in HAppS based on bytestrings) would be a better option. Read/Show is good for testing that the serialising code works, though. -- Don

As written in my other post, I will need to update data structures that were marshalled to disk. Now I'm wondering how best to prepare for that situation. E.g. one of the common situations is that a single data item gets replaced by a list of items.

Now assume that there's a SomeData type that's used across the game, and which gets incompatibly updated to SomeData1 (say, instead of containing just a string it turns into a list of strings). The update code would now have to unmarshall a blob of game data, traverse it to find all instances of SomeData, wrap them in a one-element list to turn them into SomeData1s, reconstruct the blob of game data with the SomeData1 items, and marshall the result back out to disk.

This sounds as if I'd have to write code for every single data type in the update program just to update a single data type. Is that true, or is there a way around this?

Regards, Jo

Hello Joachim, Wednesday, December 20, 2006, 11:22:24 AM, you wrote:
The update code would now have to unmarshall a blob of game data, traverse it to find all instances of SomeData, wrap them in a one-element list to turn them into SomeData1s, reconstruct the blob of game data with the SomeData1 items, and marshall the result back out to disk.
marshalling and unmarshalling code can be generated automatically using Template Haskell ([1],[2]) or Data.Generics [3]
That would mean I couldn't use infinite data structures in data that goes out to disk.
[1] also supports infinite data structures

[1] http://www.cs.helsinki.fi/u/ekarttun/SerTH/SerTH-0.2.tar.gz
[2] http://haskell.org/haskellwiki/Library/AltBinary
    http://haskell.org/haskellwiki/Library/Streams
[3] http://members.cox.net/stefanor/genericserialize-0.0.tar.gz

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On 12/20/06, Bulat Ziganshin
Hello Joachim,
Wednesday, December 20, 2006, 11:22:24 AM, you wrote:
The update code would now have to unmarshall a blob of game data, traverse it to find all instances of SomeData, wrap them in a one-element list to turn them into SomeData1s, reconstruct the blob of game data with the SomeData1 items, and marshall the result back out to disk.
marshalling and unmarshalling code can be generated automatically using Template Haskell ([1],[2]) or Data.Generics [3]
That would mean I couldn't use infinite data structures in data that goes out to disk.
[1] also supports infinite data structures
No, it only supports cyclic data structures. -- Cheers, Lemmih

Hi Jo,
You seem to be describing SYB and not knowing it:
http://homepages.cwi.nl/~ralf/syb1/
That basically does exactly what you've requested, in terms of
traversing all items when only one matters. That said, serialisation
is still a hard problem - think long and hard before picking a data
format.
With Yhc.Core I used DrIFT to derive Binary instances, keep a version tag, and if the version tags mismatch refuse to load the data.
Thanks
Neil

Neil Mitchell schrieb:
You seem to be describing SYB and not knowing it: http://homepages.cwi.nl/~ralf/syb1/
That basically does exactly what you've requested, in terms of traversing all items when only one matters.
Yup, that's exactly what I was looking for. Actually I had seen it a while ago, but didn't remember it now. Thanks.

One thing that might become a problem is that the "Scrap your boilerplate" approach seems to work only in GHC. There's nothing wrong with GHC, but it sounds like I'm committing to a specific compiler right from the start. I'd like to keep the number of choices as high as possible... and besides, if the compiler gives me an error message, or the generated code does unexpected things, I'd like to have the possibility to cross-check with a different compiler. So have other compilers picked up SYB support yet?

It might not be feasible, though. The papers mention that you can't serialize (well, actually unserialize) function values with it. For the envisioned update-through-marshalling process, this would prevent me from ever using function values in data that needs to be persistent, and that's quite a harsh restriction.
That said, serialisation is still a hard problem - think long and hard before picking a data format.
What would be the problems of choosing the wrong one?
With Yhc.Core I used DrIFT to derive Binary instances, keep a version tag, and if the version tags mismatch refuse to load the data.
Links? Regards, Jo

On Wed, Dec 20, 2006 at 07:30:02PM +0100, Joachim Durchholz wrote:
Neil Mitchell schrieb:
You seem to be describing SYB and not knowing it: http://homepages.cwi.nl/~ralf/syb1/
That basically does exactly what you've requested, in terms of traversing all items when only one matters.
Yup, that's exactly what I was looking for. Actually I had seen it a while ago, but didn't remember it now. Thanks.
You spoke of changing each element to something of a different type. I don't think SYB can do that. A solution (and a portable one too) might be to parameterize all these types, and make them instances of Functor, Foldable and Traversable (see Data.Foldable and Data.Traversable in the GHC 6.6 documentation). You'd have the labour of writing all those instances (though the trivial instances of Functor and Foldable would suffice). But once you've done that, a range of different traversals would be available.
It might not be feasible, though. The papers mention that you can't serialize (well, actually unserialize) function values with it. For the envisioned update-through-marshalling process, this would prevent me from ever using function values in data that needs to be persistent, and that's quite a harsh restriction.
That's hard to avoid, unless you have a data representation of the functions you're interested in.

Ross Paterson schrieb:
It might be not feasible though. The papers mention that you can't serialize (well, actually unserialize) function values with it. For the envisioned update-through-marshalling process, this would prevent me from ever using function values in data that needs to be persistent, and that's quite a harsh restriction.
That's hard to avoid, unless you have a data representation of the functions you're interested in.
I could encode functions by their name. I don't think that would scale to a large application with multiple developers, but this isn't that kind of project anyway.

I'd be reluctant to take that route if it means adding boilerplate code for every function that might ever be serialized. Since I'm planning to serialize an entire application, I fear that I'd need that boilerplate code for 90% of all functions, so even a single line of boilerplate per function might be too much.

Regards, Jo

On Dec 20, 2006, at 2:37 PM, Joachim Durchholz wrote:
Ross Paterson schrieb:
It might not be feasible, though. The papers mention that you can't serialize (well, actually unserialize) function values with it. For the envisioned update-through-marshalling process, this would prevent me from ever using function values in data that needs to be persistent, and that's quite a harsh restriction.

That's hard to avoid, unless you have a data representation of the functions you're interested in.
I could encode functions by their name. I don't think that would scale to a large application with multiple developers, but this isn't that kind of project anyway. I'd be reluctant to take that route if it means adding boilerplate code for every function that might ever be serialized. Since I'm planning to serialize an entire application, I fear that I'd need that boilerplate code for 90% of all functions, so even a single line of boilerplate per function might be too much.
Let me just say here that what you are attempting to do sounds very difficult. As I understand, you want to be able to serialize an entire application at some (predetermined / arbitrary?) point, change some of its code and/or data structures, de-serialize and run the thing afterwards. Doing something like this without explicit language support is going to be hard, especially in a fairly static language like Haskell. I would think Smalltalk, Erlang, or something from the Lisp/Scheme family would be more suitable for this sort of work (caveat, I have little experience with any of these languages). Also, take a look here (http://lambda-the-ultimate.org/node/526) for some related discussion.
Rob Dockins Speak softly and drive a Sherman tank. Laugh hard; it's a long way to the bank. -- TMBG

Robert Dockins schrieb:
Let me just say here that what you are attempting to do sounds very difficult. As I understand, you want to be able to serialize an entire application at some (predetermined / arbitrary?) point, change some of its code and/or data structures, de-serialize and run the thing afterwards.
Right. Though it's not too far out of the ordinary. Haskell being a rather orthogonal language, I had hoped that I could "simply serialize" any data structure.
Doing something like this without explicit language support is going to be hard, especially in a fairly static language like Haskell.
Exactly. I was intrigued when I found that libraries can do quite a lot serialization in Haskell - that gives Haskell an excellent rating in what could be called "aspect-orientedness". It doesn't help to serialize functions values or thunks, though.
I would think Smalltalk, Erlang, or something from the Lisp/Scheme family would be more suitable for this sort of work (caveat, I have little experience with any of these languages).
Erlang is actually on my list of potential alternatives. It has different advantages than Haskell, though, and right now, I'm willing to try Haskell.
Also, take a look here (http://lambda-the-ultimate.org/node/526) for some related discussion.
I'm not sure whether that relates to my project. I let network connections be handled by Apache and FastCGI, so I'm leaving out a whole lot of library issues that hit that reported project really hard. Regards, Jo

Hi
With Yhc.Core I used DrIFT to derive Binary instances, keep a version tag, and if the version tags mismatch refuse to load the data.
Links?
http://repetae.net/~john/computer/haskell/DrIFT/
http://darcs.haskell.org/yhc/src/libraries/general/Yhc/General/Binary.hs

That's DrIFT, which can derive Binary instances (pick GhcBinary), and then a module which works with the derived classes. Warning: mild hacking was required to get it going on Windows.

Thanks
Neil

| One thing that might become a problem is that the "Scrap your
| boilerplate" approach seems to work only in GHC.

I don't think so. Other compilers might not support "deriving Data", but you can always write the instance by hand.

Simon

Simon Peyton-Jones schrieb:
One thing that might become a problem is that the "Scrap your boilerplate" approach seems to work only in GHC.
I don't think so. Other compilers might not support "deriving Data", but you can always write the instance by hand.
How much boilerplate would be needed in that case? As far as I understood the web site, it would be around a line of code per data type, independently of the number of HOFs that need to iterate over the data structures - is that correct?

I understand that DrIFT is a kind of preprocessor / code generator. Could it be used to generate the necessary boilerplate?

Regards, Jo

Hello Simon, Thursday, December 21, 2006, 12:02:22 PM, you wrote:
One thing that might become a problem is that the "Scrap your boilerplate" approach seems to work only in GHC.
I don't think so. Other compilers might not support "deriving Data", but you can always write the instance by hand.
... what he needs to avoid :) i think that DrIFT can be used for it, if he ever finds any compiler better than GHC :) -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Hello Joachim, Wednesday, December 20, 2006, 9:30:02 PM, you wrote:
One thing that might become a problem is that the "Scrap your boilerplate" approach seems to work only in GHC. There's nothing wrong with GHC, but it sounds like I'm committing to a specific compiler right from the start. I'd like to keep the number of choices as high as possible...
there are really no choices for real development. many libraries, language extensions and techniques are for ghc only
and besides, if the compiler gives me an error message, or the generated code does unexpected things, I'd like to have the possibility to cross-check with a different compiler.
you can use Hugs to develop programs - it has much faster compile times, slightly better error messages and a good level of language (extensions) compatibility with ghc. parts of programs that are not compatible with Hugs may be conditionally compiled using #ifdef GHC
So have other compilers picked up SYB support yet?
SYB and Template Haskell are just examples of techniques that make GHC the obvious choice. although SYB may be supported by Hugs too; i'm not sure
That said, serialisation is still a hard problem - think long and hard before picking a data format.
What would be the problems of choosing the wrong one?
With Yhc.Core I used DrIFT to derive Binary instances, keep a version tag, and if the version tags mismatch refuse to load the data.
unless you need a specific data format (say, for compatibility with existing data sources) you may easily derive Binary instances either with DrIFT or with TH using the libraries i mentioned in my previous letter.

http://repetae.net/john/computer/haskell/DrIFT/drop/DrIFT-2.1.1.tar.gz
http://repetae.net/john/computer/haskell/DrIFT/drift.ps

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Hi
there are really no choices for real development. many libraries, language extensions and techniques are for ghc only
I develop everything in Hugs, including the Yhc compiler itself. Hugs is great. There are lots of extensions in GHC, but Haskell 98 is a perfectly useable language!
So have other compilers picked up SYB support yet?
SYB and Template Haskell are just examples of techniques that make GHC the obvious choice. although SYB may be supported by Hugs too; i'm not sure
GHC is the only one with inbuilt SYB support. Hugs has deriving Typeable (I think), but not Data. Thanks Neil

On Dec 21, 2006, at 15:53 , Neil Mitchell wrote:
Hi
there are really no choices for real development. many libraries, language extensions and techniques are for ghc only
I develop everything in Hugs, including the Yhc compiler itself. Hugs is great. There are lots of extensions in GHC, but Haskell 98 is a perfectly useable language!
I must second this opinion. There's this (false) perception that you need all kinds of extensions to make Haskell usable. It's simply not true. Certain extensions can make your life easier, but that's it. -- Lennart

Lennart Augustsson wrote:
I must second this opinion. There's this (false) perception that you need all kinds of extensions to make Haskell usable. It's simply not true. Certain extensions can make your life easier, but that's it.
To write code in Haskell, this is true. However, one of the wonderful things about Haskell is how much the type system helps you. And if you want the type system to help you even more [which, as a programmer having suffered from dynamic typing too long, I really want], those extensions are really needed. In other words, you can program in Haskell just fine without extensions. But if you want that "next level" in type safety, extensions is where it's at, at least for the kind of code I write. Jacques

Hi
In other words, you can program in Haskell just fine without extensions. But if you want that "next level" in type safety, extensions is where it's at, at least for the kind of code I write.
What level of "safety" do these type extensions give you? The biggest runtime crasher is probably pattern match failings, something that most of these type extensions don't catch at all! They do give you some safety, but not a massive amount, and not in the places where it would be truly useful.

Thanks
Neil

Neil Mitchell wrote:
In other words, you can program in Haskell just fine without extensions. But if you want that "next level" in type safety, extensions is where it's at, at least for the kind of code I write.
What level of "safety" do these type extensions give you?

Check out many, many, many of Oleg's postings to the Haskell list. His code expresses this much better than my words can.

The biggest runtime crasher is probably pattern match failings, something that most of these type extensions don't catch at all!

Array out-of-bounds, fromJust, head on an empty list, and pattern-match failures are in my list of things I wish the type system could help me with. And sometimes it can [again, see Oleg's posts]. But this is definitely where I am *eager* to see developments.

They do give you some safety, but not a massive amount, and not in the places where it would be truly useful.

Unfortunately, I agree. But I'll still take what I can get!
Jacques

Jacques Carette wrote:
Neil Mitchell wrote:
The biggest runtime crasher is probably pattern match failings, something that most of these type extensions don't catch at all!
Array out-of-bounds, fromJust, head on an empty list, and pattern-match failures are in my list of things I wish the type system could help me with. And sometimes it can [again, see Oleg's posts]. But is definitely where I am *eager* to see developments.
If I understand things correctly (admittedly, a big "if" ;) ) this is kind of the point of dependent types. A type is a set - when you put a type on an expression, you're asserting that the expression evaluates to a member of that set. So, "foo :: Integer -> Rational", among other things, asserts that the result of "foo x" (given that "x" is a member of the set of Integers) is a member of the set of Rationals.

But why stop there? Why not be able to say that "foo x" is a /positive/ rational, or a non-negative rational between 0 and 1? Or, since we're just describing sets, why not explicitly allow the set to depend on x? Why not let the result type be "the set of rationals r such that 1/r == x"?

The Curry-Howard isomorphism leads to all of that. A program that outputs some value "x" is (isomorphic to) a proof that x is a member of some type. We just generally lack a sufficiently powerful type grammar to express that directly in the program. Dependent types let you make the types of the output /depend/ on the actual values of the input.

Check out Conor McBride and James McKinna's paper "The View From the Left", and their work on the Epigram language to see where they've taken this... http://www.dcs.st-andrews.ac.uk/~james/ - fascinating stuff.

I agree with you, though - I'm very eager to see further developments along these lines. It's the main reason I started learning Haskell, and I'm absolutely convinced that functional programming and this kind of increasingly strong typing are the way of the future for programming.

--
----- What part of "ph'nglui bglw'nafh Cthulhu R'lyeh wagn'nagl fhtagn" don't you understand?

Jacques Carette wrote:

Array out-of-bounds, fromJust, head on an empty list, and pattern-match failures are in my list of things I wish the type system could help me with. And sometimes it can [again, see Oleg's posts]. But this is definitely where I am *eager* to see developments.

Scott Brickner wrote:

I agree with you, though - I'm very eager to see further developments along these lines. It's the main reason I started learning Haskell, and I'm absolutely convinced that functional programming and this kind of increasingly strong typing are the way of the future for programming.

Yes, dependent types have a lot to do with all this. And I am an eager lurker of all this Epigram.

What must be remembered is that full dependent types are NOT needed to get a lot of the benefits of dependent-like types. This is what some of Oleg's type gymnastics shows (and others too). My interest right now lies in figuring out exactly how much can be done statically. For example, if one had decent naturals at the type level (ie naturals encoded not-in-unary) with efficient arithmetic AND a few standard decision procedures (for linear equalities and inequalities say), then most of the things that people currently claim need dependent types are either decidable or have very strong heuristics that "work" [1].

Jacques

[1] @book{SymbolicAnalysis, author = {Thomas Fahringer and Bernhard Scholz}, title = {Advanced Symbolic Analysis for Compilers: New Techniques and Algorithms for Symbolic Program Analysis and Optimization}, publisher = pub-SV, series = {Lecture Notes in Computer Science}, volume = {2628}, year = {2003}, isbn = {3-540-01185-4}}

Jacques Carette wrote:
Yes, dependent types have a lot to do with all this. And I am an eager lurker of all this Epigram.
Would it be possible to augment Haskell's type system so that it was the same as that used in Epigram? Epigram itself uses a novel 2d layout and a novel way of writing programs (by creating a proof interactively), but these seem orthogonal to the actual type system itself.

Also, typing is not the only issue for compile time guarantees. Consider:

data Dir = Left | Right | Up | Down deriving Eq

-- Compiler can check the function is total
foo :: Dir -> String
foo Left = "Horizontal"
foo Right = "Horizontal"
foo Up = "Vertical"
foo Down = "Vertical"

versus

-- Less verbose but compiler can't look inside guards
foo x | x == Left || x == Right = "Horizontal"
foo x | x == Up || x == Down = "Vertical"

versus something like:

foo (Left || Right) = ...
foo (Up || Down) = ...

Brian.
--
http://www.metamilk.com

It's possible to augment Haskell's type system to be the one in Epigram. But it would no longer be Haskell. :) And to meet the goals of Epigram you'd also have to remove (unrestricted) recursion from Haskell. -- Lennart

Brian Hulley wrote:
Would it be possible to augment Haskell's type system so that it was the same as that used in Epigram?
No, no! Epigram is a wonderfully pure research experiment in one corner of the design space. The corner it is exploring is not particularly "Haskell like", though the results of the exploration should bear fruit for Haskell now and then [and it already has]. While I am quite sure Haskell could do with more information in its types, proof requirements cannot be anywhere close to what they are in Epigram. I am convinced there is a "Haskell compatible" middle road.
Also, typing is not the only issue for compile time guarantees. Consider: [Example of coding enumeration as pattern-match vs guards vs ... deleted]
This is much more of an engineering issue than a theoretical issue. In some cases (explicit pattern match with no guards), coverage checking is trivial because the language you are dealing with is so simple. In other cases (guards), in general the problem is undecidable, but there are many, many particular cases where there are applicable decision procedures. It seems to be a common choice amongst compiler writers to not wade into these waters -- although the people doing static analysis have been swimming there for quite some time.

My feeling is that slowly increasing amounts of static analysis will be done by compilers (beyond just "types" or the current strictness analyses) to include these kinds of total/partial checks on guards, "shape" analysis, etc. It is already happening; the one question is at what speed it will happen in Haskell. Maybe it is time for ICFP and SAS to be held together.

Jacques

On Dec 21, 2006, at 5:03 PM, Jacques Carette wrote:
... What must be remembered is that full dependent types are NOT needed to get a lot of the benefits of dependent-like types. This is what some of Oleg's type gymnastics shows (and others too). My interest right now lies in figuring out exactly how much can be done statically. For example, if one had decent naturals at the type level (ie naturals encoded not-in-unary) with efficient arithmetic AND a few standard decision procedures (for linear equalities and inequalities say), then most of the things that people currently claim need dependent types are either decidable or have very strong heuristics that "work" [1].
My understanding is that BlueSpec did roughly this. As we're discovering in Fortress, type-level naturals are a big help; faking it really is horrible, as unary representations are unusable for real work and digital representations require a ton of stunts to get the constraints to solve in every direction (and they're still ugly). I for one would welcome a simple extension of Haskell with type-level nats (the implementor gets to decide if they're a new kind, or can interact with * somehow). -Jan-Willem Maessen [PS: hadn't seen the LNCS reference before, thanks to Jacques for sending that along.]

Yes, Bluespec has efficient type level naturals. But it only has arithmetic and some trivial decision procedures. The slogan is "the type checker knows arithmetic, not algebra". It worked pretty well. But you soon get into situations where you need polymorphic recursion of functions with type level naturals. It needs careful consideration (I never implemented that for Bluespec). -- Lennart

Jacques Carette wrote:
Lennart Augustsson wrote:
I must second this opinion. There's this (false) perception that you need all kinds of extensions to make Haskell usable. It's simply not true. Certain extensions can make your life easier, but that's it.
To write code in Haskell, this is true.
However, one of the wonderful things about Haskell is how much the type system helps you. And if you want the type system to help you even more [which, as a programmer having suffered from dynamic typing too long, I really want], those extensions are really needed.
In other words, you can program in Haskell just fine without extensions. But if you want that "next level" in type safety, extensions is where it's at, at least for the kind of code I write.
Or, to go the other way, if you don't care about type safety, you might as well program in Javascript. -- ----- What part of "ph'nglui bglw'nafh Cthulhu R'lyeh wagn'nagl fhtagn" don't you understand?

Lennart Augustsson wrote:
On Dec 21, 2006, at 15:53 , Neil Mitchell wrote:
Hi
there are really no choices for real development. many libraries, language extensions and techniques are for ghc only
I develop everything in Hugs, including the Yhc compiler itself. Hugs is great. There are lots of extensions in GHC, but Haskell 98 is a perfectly useable language!
I must second this opinion. There's this (false) perception that you need all kinds of extensions to make Haskell usable. It's simply not true. Certain extensions can make your life easier, but that's it.
Well, FingerTrees, for example, require MPTCs, as in:

pushL :: Measured v a => a -> FingerTree v a -> FingerTree v a

For any kind of object oriented programming you need existentials to separate the interface from the different concrete objects. Also, in the code I've written I've often needed higher rank polymorphism and scoped type variables.

Sure it's possible to use all kinds of horrific hacks to try and avoid the need for scoped type variables etc, but why would anyone waste their time writing difficult obfuscated code when GHC has the perfect solution all ready to use? In other words, what on earth is good about gluing oneself to Haskell98? Life's moved on...

Best regards, Brian. -- http://www.metamilk.com

Hi
In other words, what on earth is good about gluing oneself to Haskell98? Life's moved on...
If you stick to Haskell 98 you can:

* Convert your code to Clean (Hacle)
* Debug it (Hat)
* Run it in your browser (ycr2js)
* Document it (Haddock)
* Make a cross platform binary (yhc)
* Get automatic suggestions (Dr Haskell)
...

Sometimes you need a type extension, but if you don't, you do get benefits.

Thanks
Neil

Neil Mitchell wrote:
Hi
In other words, what on earth is good about gluing oneself to Haskell98? Life's moved on...
If you stick to Haskell 98 you can:
* Convert your code to Clean (Hacle) * Debug it (Hat) * Run it in your browser (ycr2js) * Document it (Haddock) * Make a cross platform binary (yhc) * Get automatic suggestions (Dr Haskell) ...
Sometimes you need a type extension, but if you don't, you do get benefits.
True, though it would be even better if the "usual" extensions were more widely supported, though I suppose identifying what's useful and therefore worth supporting is the point of the Haskell Prime process.

As an aside, I've often thought it would be better if the various components of Haskell compilers/tools were separated out so that people could effectively build their own compiler tailored more specifically to their needs. I.e. lots of smaller projects, each dealing with a particular phase of Haskell processing, joined together by standard APIs, so that someone could use the latest type system extensions with whole program optimization while someone else could use those same type extensions with a back end designed for graphical debugging etc., and also so that people just interested in developing whole program optimization (for example) wouldn't have to reinvent the ever-more-difficult wheel of lexing/parsing/typechecking/targeting multiple platforms...

Best regards, Brian. -- http://www.metamilk.com

Hi
True, though it would be even better if the "usual" extensions were more widely supported, though I suppose identifying what's useful and therefore worth supporting is the point of the Haskell Prime process.
Exactly the reason for Haskell Prime.
As an aside I've often thought it would be better if the various components of Haskell compilers/tools would be separated out so that people could effectively build their own compiler tailored more specifically for their needs.
http://neilmitchell.blogspot.com/2006/12/bhc-basic-haskell-compiler.html I thought the same thing. Note that Yhc already has a lightweight API for manipulating Yhc.Core files and one for Yhc.Bytecode files. Things are moving in that direction slowly. There is also the GHC API approach. Thanks Neil

I have skimmed the serialization libraries on haskell.org (NewBinary, SerTH, AltBinary, HsSyck, GenericSerialize). I'm under the impression that these all force the data that they serialize. Is that correct? If yes: are there workarounds? I'd really like to be able to use infinite data structures in the data that I serialize. Regards, Jo

Hi Joachim,
All those libraries really force the data because they are all written in Haskell. If you want to serialize thunks then you will need some support from the RTS. This is something that is implemented in Clean, but this just uncovers a lot of other problems:

The serialization of a thunk requires storing a reference to the procedure for thunk evaluation. After that it is tricky to do the deserialization, because you will have to dynamically link the thunk evaluation code. The program that does the serialization and the program that has to read it aren't necessarily one and the same program. In this case the reading application should have some way to call code from the writing application.

Another problem arises when you have to recompile the writing application. After the recompilation the evaluation code for some thunk may not exist any more. The solution that Clean uses is to keep a copy of each executable after each compilation.
Cheers,
Krasimir

Krasimir Angelov schrieb:
All those libraries really force the data because they are all written in Haskell. If you want to serialize thunks then you will need some support from the RTS.
Good to hear that my conjectures aren't too far from reality. Does any Haskell implementation have that kind of RTS support?
The serialization of a thunk requires storing a reference to the procedure for thunk evaluation. After that it is tricky to do the deserialization, because you will have to dynamically link the thunk evaluation code.
Actually, I'd be happy if the deserializing program were required to contain the same code that serialized the thunk. Technically, I think that could be achieved by generating checksums over a suitably normalized version of the source code. Or, for a bytecode-based RTS, one could simply store the bytecode for each thunk (with proper sharing, of course).
The program that does the serialization and the program that has to read it aren't necessarily one and the same program.
Indeed. There is little need to change data structures and have versioning unless the code that uses the data changes.
In this case the reading application should have some way to call code from the writing application.
Yup. It may be kept in the old executable, or stored in the marshalled data (I think that's what Mozart/Oz is doing).
Another problem arises when you have to recompile the writing application. After the recompilation the evaluation code for some thunk may not exist any more. The solution that Clean uses is to keep a copy of each executable after each compilation.
I wouldn't mind if I had to explicitly manage old versions of the code. I'll never have data that's more than one version old, and that can be managed using a simple script. Regards, Jo

On 12/21/06, Joachim Durchholz
Krasimir Angelov schrieb:
All those libraries really force the data because they are all written in Haskell. If you want to serialize thunks then you will need some support from the RTS.
Good to hear that my conjectures aren't too far from reality.
Does any Haskell implementation have that kind of RTS support?
Not yet. I don't even know of anyone planning to do that. Cheers, Krasimir

Hi
All those libraries really force the data because they are all written in Haskell. If you want to serialize thunks then you will need some support from the RTS.
Good to hear that my conjectures aren't too far from reality.
Does any Haskell implementation have that kind of RTS support?
Not yet. I don't even know of anyone planning to do that.
Actually I think Yhc has exactly this kind of support, and you can even ship these thunks over the wire and reconstruct them on a different machine :) http://darcs.haskell.org/yhc/src/packages/yhc-base-1.0/YHC/Runtime/API.hs Poorly documented, but it does work. Use at your own risk! Thanks Neil

On Wed, Dec 20, 2006 at 11:03:42PM +0100, Joachim Durchholz wrote:
If yes: are there workarounds? I'd really like to be able to use infinite data structures in the data that I serialize.
There is an interesting technique that allows you to serialize infinite, lazy or functional values: don't try to describe how those values look, but how they came to be. In a purely functional language it's easy to guarantee that you will get the same value each time you "replay" its creation. Those who used WASH/CGI should know what I mean. Best regards Tomasz

Tomasz Zielonka schrieb:
On Wed, Dec 20, 2006 at 11:03:42PM +0100, Joachim Durchholz wrote:
If yes: are there workarounds? I'd really like to be able to use infinite data structures in the data that I serialize.
There is an interesting technique that allows you to serialize infinite, lazy or functional values: don't try to describe how those values look, but how they came to be.
Ah, that's an interesting approach that I haven't thought of. What are its limits? Does it impose design constraints? Regards, Jo

Hello Joachim, Friday, December 22, 2006, 2:30:32 AM, you wrote:
There is an interesting technique that allows you to serialize infinite, lazy or functional values: don't try to describe how those values look, but how they came to be.
Ah, that's an interesting approach that I haven't thought of.
i'm not sure that's what Tomasz means, but at least i imagined something like the following:

data Struct = Struct { a::Int, b::Int }

-- the "virtual" functional field c is recomputed from the stored,
-- non-functional fields:
c (Struct a b) = (a*)

i.e. the data structure contains only non-functional fields, which include all the necessary arguments for the virtual functional fields

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

OK, just to let everybody know why I'm dropping Haskell. Basically, the reasoning is this:

* I want to write a process that doesn't terminate.
* Since the environment can and will enforce termination occasionally, the process must be able to write its state to some external storage ("serialize it"; flat file or database doesn't make much of a difference). In fact I'll want to serialize most of the program's state on a regular basis (say, once per day), just to safeguard against bugs and hardware crashes.
* In all Haskell implementations (with the exception of Yhc), there is no way to write out data without forcing unevaluated expressions inside. (Yhc isn't mature yet, so it's not an option currently.)
* Forcing the expressions that get written out means that I cannot use lazy evaluation freely. In particular, if some library code returns a data structure that contains a lazy-infinite subexpression, serializing it would not terminate, and I'd have little chance of fixing that.

Now that the first serialization will destroy many of the advantages of laziness, I don't think I should try to wrestle with its disadvantages. I'll move on to the alternatives - Alice ML and/or Clean. Both can serialize without forcing lazy subexpressions.

This all said, I still don't think that Haskell or its execution environments are bad. They just don't fit my requirements, which are A) I want to get my feet wet with an FPL (seriously, this time), and B) I want to do a webserver application (I know the domain).

Regards, Jo

On Fri, Dec 22, 2006 at 06:16:03PM +0100, Joachim Durchholz wrote:
* Forcing the expressions that get written out means that I cannot use lazy evaluation freely. In particular, if some library code returns a data structure that contains a lazy-infinite subexpression, serializing it would not terminate, and I'd have little chance of fixing that.
I fear that you may not have a good intuition about laziness and how it is used in practical programming at this moment. Why not try and see if there really is a problem?
Now that the first serialization will destroy many of the advantages of laziness, I don't think I should try to wrestle with its disadvantages.
A common problem with laziness is the possibility of introducing space leaks. That's when the program accumulates a big thunk which, if forced, would reduce to a small and simple value. "Nobody" is interested in this value now, so it is kept in a lazy, space inefficient form. A classic example is a thunk like this (1+1+1+1+1+1+...) One solution to this problem is "deepSeq" - forcing full evaluation of a data structure. Naive serialisation performs this as a side-effect. What I want to tell is that the "problem" you see here could just as well be a *solution* to a different problem that you haven't considered yet - space leaks!
I'll move on to the alternatives - Alice ML and/or Clean. Both can serialize without forcing lazy subexpressions.
I am pretty sure that with those solutions you will meet other problems of the same caliber. What I propose is the following: keep Haskell as a possibility (with one discovered problem), consider problems with those other languages, then decide. There doesn't seem to be a perfect programming language.
This all said, I still don't think that Haskell or its execution environments are bad. They just don't fit my requirements, which are A) I want to get my feet wet with an FPL (seriously, this time), and B) I want to do a webserver application (I know the domain).
Haskell would play well with those requirements. I've created some web applications in it, and I was delighted with the things I learned in the process. I am starting to suspect that you have a third requirement you haven't told us about, like: C) I want to make profit ;-) Best regards Tomasz

Tomasz Zielonka schrieb:
On Fri, Dec 22, 2006 at 06:16:03PM +0100, Joachim Durchholz wrote:
* Forcing the expressions that get written out means that I cannot use lazy evaluation freely. In particular, if some library code returns a data structure that contains a lazy-infinite subexpression, serializing it would not terminate, and I'd have little chance of fixing that.
I fear that you may not have a good intuition about laziness and how it is used in practical programming at this moment. Why not try and see if there really is a problem?
Because I might end up having sunk several weeks of my time, just to find that there is a problem anyway.

Second reason: the restriction will warp my design style. I'll avoid putting infinite data structures in the game data - and that means I can't explore open-ended strategies; I'll have to "program around" the issue. For this reason, I'd rather wait until Haskell serializes thunks as a matter of course, and explore lazy programming fully, rather than trying it out now and having my style warped.
Now that the first serialization will destroy many of the advantages of laziness, I don't think I should try to wrestle with its disadvantages.
A common problem with laziness is the possibility of introducing space leaks. That's when the program accumulates a big thunk which, if forced, would reduce to a small and simple value. "Nobody" is interested in this value now, so it is kept in a lazy, space inefficient form. A classic example is a thunk like this (1+1+1+1+1+1+...)
One solution to this problem is "deepSeq" - forcing full evaluation of a data structure. Naive serialisation performs this as a side-effect.
What I want to tell is that the "problem" you see here could just as well be a *solution* to a different problem that you haven't considered yet - space leaks!
I'm aware of this kind of problem. However, hailing deepSeq as a solution before I even encounter the problem is just constraining the design space. Besides, why use a lazy language at all if I can't use laziness anyway? Most of the data will be in the server, which will get serialized at some time.
I'll move on to the alternatives - Alice ML and/or Clean. Both can serialize without forcing lazy subexpressions.
I am pretty sure that with those solutions you will meet other problems of the same caliber.
No FUD, please ;-) And yes I know there are devils lurking in every language and environment. I'm pretty sure that Haskell has a few others to offer, too. (There's still no good introduction to Monads, for example. One that's understandable for a programmer who knows his Dijkstra well but no category theory. And a few other things.)
What I propose is the following: keep Haskell as a possibility (with one discovered problem), consider problems with those other languages, then decide.
Exactly. The decision I made right now is just to explore another language. I'll be back to try out Haskell after a while - though probably not before serialization gets serious.
There doesn't seem to be a perfect programming language.
This all said, I still don't think that Haskell or its execution environments are bad. They just don't fit my requirements, which are A) I want to get my feet wet with an FPL (seriously, this time), and B) I want to do a webserver application (I know the domain).
Haskell would play well with those requirements. I've created some web applications in it, and I was delighted with the things I learned in the process.
I am starting to suspect that you have a third requirement you haven't told us about, like: C) I want to make profit ;-)
I won't deny that profit is on the agenda, but it's taking a back seat during the evaluation phase. The situation is slightly different: I'm sick of PHP and want to make profit with something better ;-) However, I admit there are additional requirements, too. I just didn't want to add too much noise explaining why I chose exactly that project for exactly this language. The real hidden agenda is that I'd like to have the option to move away from browser-based stuff, toward client-server stuff. The ability to serialize arbitrary data between applications would be almost indispensable - you can program around the restriction, but then you get impedance mismatches *between Haskell applications*. Regards, Jo
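For illustration, a sketch of the kind of workaround I mean (Command and its constructors are made up): restrict the wire format to first-order data with derived Read/Show. Anything holding functions or thunks simply can't cross the boundary, which is exactly the mismatch:
[code]
-- Only plain first-order data can travel this way.
data Command = Move Int Int | Attack String | Quit
  deriving (Show, Read, Eq)

encode :: Command -> String
encode = show

-- reads returns all full parses; accept only an unambiguous,
-- fully consumed one.
decode :: String -> Maybe Command
decode s = case reads s of
             [(c, "")] -> Just c
             _         -> Nothing
[/code]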

Joachim Durchholz wrote:
I'll move on to the alternatives - Alice ML and/or Clean. Both can serialize without forcing lazy subexpressions.
I don't know about Clean, but with respect to Alice ML this is not correct: Alice ML uniformly blocks on futures upon pickling, including lazy ones. Sometimes you may want to pickle lazy suspensions as such, but more often it is not what you want. In particular, the suspension can easily be larger than its result, and the closure may contain resources which cannot be pickled. If such a suspension was produced by an abstraction you may even argue that making contained resources observable partially breaks the abstraction. To adhere to uniformity, strong abstraction, and the Principle of Least Surprise, we thus chose to force lazy futures in Alice ML.
No FUD, please ;-)
And yes I know there are devils lurking in every language and environment. I'm pretty sure that Haskell has a few others to offer, too. (There's still no good introduction to Monads, for example. One that's understandable for a programmer who knows his Dijkstra well but no category theory. And a few other things.)
No FUD, please ;-) ;-) Cheers, - Andreas

rossberg@ps.uni-sb.de schrieb:
Joachim Durchholz wrote:
I'll move on to the alternatives - Alice ML and/or Clean. Both can serialize without forcing lazy subexpressions.
I don't know about Clean, but with respect to Alice ML this is not correct: Alice ML uniformly blocks on futures upon pickling, including lazy ones.
Bad enough, but not necessarily a showstopper: lazy data structures aren't idiomatic in ML (I just hope they aren't in Alice ML either).
Sometimes you may want to pickle lazy suspensions as such, but more often it is not what you want. In particular, the suspension can easily be larger than its result,
I'd say that in this case, the suspension should have been forced earlier. I.e. the problem is not in the pickling but in the suspension being larger than its result. I'm entirely unsure how to determine when a suspension should be forced anyway. The programmer could give hints, but that would probably break all kinds of abstraction barriers; and the system usually doesn't have enough information to decide when it should do it. Seems like one of those problems that generate lots of PhD papers...
and the closure may contain resources which cannot be pickled. If such a suspension was produced by an abstraction you may even argue that making contained resources observable partially breaks the abstraction.
Well, Alice ML doesn't serialize things that don't make sense outside the originating process (i.e. resources). That's better than silently serializing a file handle and being surprised by that handle meaning an entirely different file after unpickling. OTOH one could do even better: simply unpickle a resource as a proxy for the original resource in the originating process. (If the process has died, assume that the resource was closed - the vast majority of resource types have a "closed/unusable" state anyway.)
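Sketched in Haskell terms (ProcessId and the whole API are hypothetical; no existing library is implied):
[code]
import System.IO (Handle)

type ProcessId = String  -- placeholder for a real process identifier

-- A resource is either local, a proxy back to its originating
-- process, or closed (assumed when the originator has died).
data Resource
  = Local Handle
  | Proxy ProcessId
  | Closed

useResource :: Resource -> IO ()
useResource (Local _h)  = putStrLn "operate on the local handle directly"
useResource (Proxy pid) = putStrLn ("forward the operation to process " ++ pid)
useResource Closed      = putStrLn "originator gone; behave as a closed resource"
[/code]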
To adhere to uniformity, strong abstraction, and the Principle of Least Surprise, we thus chose to force lazy futures in Alice ML.
Well, I wouldn't have expected that pickling has an effect (other than wrapping the value up for transfer), so at least I would have been greatly surprised. I even dislike seeing something like that in a language that has side effects. But you can't have everything anyway ;-) Regards, Jo

Joachim Durchholz wrote:
To adhere to uniformity, strong abstraction, and the Principle of Least Surprise, we thus chose to force lazy futures in Alice ML.
Well, I wouldn't have expected that pickling has an effect (other than wrapping the value up for transfer), so at least I would have been greatly surprised.
That "effect" is just strictness. Since you generally cannot pickle a future (and usually wouldn't want to), pickling naturally has to be strict. Making lazy futures a special case, in this one single strict operation, would be surprising (and extremely ad-hoc). - Andreas

Hi
(There's still no good introduction to Monads, for example. One that's understandable for a programmer who knows his Dijkstra well but no category theory. And a few other things.)
I grasped this one first time round: http://haskell.org/haskellwiki/Monads_as_containers No category theory. A basic understanding of apples and boxes is all that is required. http://haskell.org/haskellwiki/Monad has a list of about 10, plus pretty much every book/general tutorial introduces monads as well. If there is really nothing out there that helps you understand, you might want to prod some authors as to what isn't clear/understandable. And Haskell lets you serialise thunks just fine (use Yhc), but really, if that's what you want to do, I think you are going to suffer a lot in the long run... I have written client/server apps before; the whole point is that you want to decide what computation gets performed on which side, not leave it up to the vagaries of lazy evaluation to come up with a random solution. Haskell is fun, Haskell is good, but it's not the right answer to every question. Just have fun doing whatever you do :) If at the end you write up a little summary of your project - what you used, how it went, what issues you ran into - then perhaps people can tell you (with the benefit of hindsight) how Haskell might have been. Thanks Neil
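As a taste of the container view (half is a made-up example, not taken from the wiki page): Maybe is a box holding zero or one value, and (>>=) opens the box and applies a box-producing function to whatever is inside.
[code]
half :: Int -> Maybe Int
half n
  | even n    = Just (n `div` 2)
  | otherwise = Nothing

example :: Maybe Int
example = Just 20 >>= half >>= half          -- Just 5

failing :: Maybe Int
failing = Just 20 >>= half >>= half >>= half -- Nothing (5 is odd)
[/code]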

Neil Mitchell schrieb:
Hi
(There's still no good introduction to Monads, for example. One that's understandable for a programmer who knows his Dijkstra well but no category theory. And a few other things.)
I grasped this one first time round: http://haskell.org/haskellwiki/Monads_as_containers
That one escaped me. Thanks.
And Haskell lets you serialise thunks just fine (use Yhc),
Yhc explicitly says "not production-ready" on the main page. I hope to return to it when that notice is gone.
but really, if that's what you want to do, I think you are going to suffer a lot in the long run... I have written client/server apps before; the whole point is that you want to decide what computation gets performed on which side, not leave it up to the vagaries of lazy evaluation to come up with a random solution.
I'll indeed want to keep quite a strict tab on what gets computed where. However, I can foresee the time when I explicitly want a thunk passed around. And I need it now already for making snapshots of the simulation's state, which will most likely contain thunks (e.g. I like to work in infinite game universes, where the data "springs into existence" at the time when it's being looked at; with lazy evaluation, I could express that in a straightforward fashion as an infinite map from coordinates to game data, and without it, I have to use other techniques).
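Roughly what I have in mind (Tile and genTile are invented, and coordinates are restricted to one quadrant for brevity): the universe is an infinite lazy structure, and each tile is a thunk until first looked at. Serializing this naively would of course diverge, which is exactly the tension above.
[code]
data Tile = Plains | Forest | Water deriving Show

-- Deterministic procedural generation: the value exists the
-- moment it is looked at; no stored state required.
genTile :: (Int, Int) -> Tile
genTile (x, y) = [Plains, Forest, Water] !! ((x * 31 + y * 17) `mod` 3)

-- An infinite lazy grid; universe !! y !! x springs into
-- existence on first lookup.
universe :: [[Tile]]
universe = [ [ genTile (x, y) | x <- [0 ..] ] | y <- [0 ..] ]

tileAt :: Int -> Int -> Tile
tileAt x y = universe !! y !! x
[/code]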
Haskell is fun, Haskell is good, but it's not the right answer to every question. Just have fun doing whatever you do :)
Thanks - I hope it will work out that way. I have seen far too many fun projects turn into bad ones... (might as well have been my own fault, of course - we'll see).
If at the end you wrote up a little summary of your project, what you used, how it went, what issues you ran into - then perhaps people can tell you (with the benefit of hindsight) how Haskell might have been.
Um... I think you'll be able to tell which problems would have been non-problems in Haskell, but you probably won't be able to tell which non-problems would have been problems in Haskell. (I tend to do things differently from the established standard way, which means I'm usually both very innovative and very frustrated...) Regards, Jo
participants (17)
- Brian Hulley
- Bulat Ziganshin
- dons@cse.unsw.edu.au
- Jacques Carette
- Jan-Willem Maessen
- Joachim Durchholz
- Krasimir Angelov
- Lemmih
- Lennart Augustsson
- Marc A. Ziegert
- Neil Mitchell
- Robert Dockins
- Ross Paterson
- rossberg@ps.uni-sb.de
- Scott Brickner
- Simon Peyton-Jones
- Tomasz Zielonka