Re: Learning about Programming Languages (specifically Haskell)

Also, one more thing - if someone could write some comments to go along with the source code that explain what it is doing, that would be really helpful. I can see the general structure, but I don't know the ins and outs of Haskell. If someone could augment the example with comments explaining what the functions do, that would be great!

data Door = Open | Closed deriving Show

toggle Open = Closed
toggle Closed = Open

pass k = zipWith ($) (cycle $ replicate k id ++ [toggle])

run n = foldl (flip pass) (replicate n Closed) [0..n]

Do I need to add run 100 to the end of the example for it to actually do something?

If you are running from GHCi, just type run 100 at the prompt. If you intend to compile it, you have to add

main = print $ run 100

The compiler adds a call to main :: IO (), which is intended to be the main entry point of your code. We need to add print, as run has type

run :: Int -> [Door]

so run 100 has type [Door]. print is

print :: (Show a) => a -> IO ()

The IO () stands for an I/O action that returns no useful value; this is the black magic of Haskell, intended to separate pure code from I/O side effects...

On Mon, May 3, 2010 at 06:31, Samuel Williams <space.ship.traveller@gmail.com> wrote:
Also, one more thing - if someone could write some comments to go along with the source code that explain what it is doing, that would be really helpful. I can see the general structure, but I don't know the ins and outs of Haskell. If someone could augment the example with comments explaining what the functions do that would be great!
data Door = Open | Closed deriving Show
toggle Open = Closed
toggle Closed = Open
pass k = zipWith ($) (cycle $ replicate k id ++ [toggle])
run n = foldl (flip pass) (replicate n Closed) [0..n]
Do I need to add run 100 to the end of the example for it to actually do something?
-- Rafael Gustavo da Cunha Pereira Pinto
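Assembled from the snippets above, the complete compilable file would look like this (type signatures added for clarity):

data Door = Open | Closed deriving Show

toggle :: Door -> Door
toggle Open   = Closed
toggle Closed = Open

-- Pass k toggles every (k+1)'th door and leaves the rest alone.
pass :: Int -> [Door] -> [Door]
pass k = zipWith ($) (cycle $ replicate k id ++ [toggle])

-- Start from n closed doors and fold passes 0..n over them.
run :: Int -> [Door]
run n = foldl (flip pass) (replicate n Closed) [0..n]

main :: IO ()
main = print $ run 100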

Reasons to learn Haskell include:

- Lazy evaluation can make some kinds of algorithms possible to implement that aren't possible to implement in other languages (without modification to the algorithm).
- Strict type system allows for a maximum number of programming errors to be caught at compile time.
- Functional design makes designing for concurrency and parallelism simpler than in procedural or OO designs.
- Clear differentiation of impure code helps to isolate points of failure and to make the programmer more aware of possible side effects.
- Knowledge of higher-order functions can provide the programmer with unique ways of solving certain problems, even in non-functional languages.
- Excellent Foreign Function Interface makes integration with existing libraries relatively painless (compared with many other languages).

There are probably other reasons I'm not even aware of as I'm still a beginner myself, but to me those are some of the most important.

The program you provided seems like a poor example to demonstrate the language, as it uses some very hard-to-follow logic for someone not familiar with the language. Furthermore, the results cannot be easily explained nor reasoned about simply (mostly due to the repeated application of id and toggle to the same list as foldl runs). Nevertheless I've attempted to provide what comments I can, although I'm sure someone can do better than me and I might have made a mistake somewhere.

--- Begin Code ---

{- This declares a new type, Door; two new constructors for the Door type, Open and Closed; and tells the compiler to make Door an instance of the Show class, which provides the function show, used to convert something into a String, i.e. show :: Door -> String -}
data Door = Open | Closed deriving Show

toggle :: Door -> Door
toggle Open = Closed -- New function to convert an Open Door to a Closed Door.
toggle Closed = Open

{- I broke this line down for easier understanding. This line takes two lists and combines them using the ($) operator. The first list is provided by converter; the second list is implicit, as can be seen from the function signature provided below, and consists of a list of Doors. In other words, it takes each function in the converter list and applies it to the corresponding Door from the last argument to produce a new list of Doors. The list this produces can be thought of as looking something like the following when called with 3, for example:
[(id $ Door), (id $ Door), (id $ Door), (toggle $ Door), (id $ Door), (id $ Door)...] -}
pass :: Int -> [Door] -> [Door]
pass k = zipWith ($) converter
  where converter :: [Door -> Door]
        {- This produces a list of functions from Door to Door: it produces k id functions, one toggle function, and then repeats. id just returns whatever it's given. -}
        converter = cycle $ replicate k id ++ [toggle]

{- This creates two lists, one n long of Closed instances, and one from 0 to n. flip pass reverses the order of arguments to pass so that instead of taking a number and a list of Doors it instead takes a list of Doors and a number. foldl takes one number from the list, and the list of Closed Door instances if this is the first time through (or the result of the last pass), and passes them both to pass. -}
run :: Int -> [Door]
run n = foldl (flip pass) (replicate n Closed) [0..n]

main = print $ run 100

--- End Code ---

-R. Kyle Murphy
--
Curiosity was framed, Ignorance killed the cat.
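For reference, a hand-worked trace for a small n shows what the fold is doing (a sketch to double-check in GHCi, using the definitions above):

-- run 3 = foldl (flip pass) [Closed,Closed,Closed] [0,1,2,3]
-- pass 0 uses cycle [toggle]          -> [Open,  Open,   Open  ]
-- pass 1 uses cycle [id,toggle]       -> [Open,  Closed, Open  ]
-- pass 2 uses cycle [id,id,toggle]    -> [Open,  Closed, Closed]
-- pass 3 uses cycle [id,id,id,toggle] -> [Open,  Closed, Closed] (no door reached)
-- Hence run 3 == [Open,Closed,Closed]: only door 1, a perfect square, stays open.
demoRun3 :: [Door]
demoRun3 = run 3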
On Mon, May 3, 2010 at 06:43, Rafael Gustavo da Cunha Pereira Pinto <RafaelGCPP.Linux@gmail.com> wrote:
If you are running from GHCi, just type run 100 at the prompt.
If you intend to compile it, you have to add
main = print $ run 100
The compiler adds a call to main :: IO (), which is intended to be the main entry point of your code.
We need to add print, as run has type
run :: Int -> [Door]
so run 100 has type [Door].
print is
print :: (Show a) => a -> IO ()
The IO () stands for an I/O action that returns no useful value; this is the black magic of Haskell, intended to separate pure code from I/O side effects...
On Mon, May 3, 2010 at 06:31, Samuel Williams <space.ship.traveller@gmail.com> wrote:
Also, one more thing - if someone could write some comments to go along with the source code that explain what it is doing, that would be really helpful. I can see the general structure, but I don't know the ins and outs of Haskell. If someone could augment the example with comments explaining what the functions do that would be great!
data Door = Open | Closed deriving Show
toggle Open = Closed
toggle Closed = Open
pass k = zipWith ($) (cycle $ replicate k id ++ [toggle])
run n = foldl (flip pass) (replicate n Closed) [0..n]
Do I need to add run 100 to the end of the example for it to actually do something?
-- Rafael Gustavo da Cunha Pereira Pinto

On Mon, May 3, 2010 at 9:17 AM, Kyle Murphy wrote:
Reasons to learn Haskell include:

- Lazy evaluation can make some kinds of algorithms possible to implement that aren't possible to implement in other languages (without modification to the algorithm).
One could say the reverse as well. I would say that laziness allows many more *compositions* of algorithms. Many more objects can be described from simple building blocks (combinators).

Strict type system allows for a maximum number of programming errors to be caught at compile time.
I keep hearing this statement, but others would argue that programming errors caught at compile time only form a minor subset of all errors caught.

So, in functional programming languages with a strict type system, e.g. Haskell, do typing errors form a larger subset of all programming errors?

I don't mean "tpynig errros". :)

--
Regards, Casey

On Mon, May 3, 2010 at 11:34 AM, Casey Hawthorne wrote:
Strict type system allows for a maximum number of programming errors to be caught at compile time.
I keep hearing this statement but others would argue that programming errors caught at compile time only form a minor subset of all errors caught.
So, in functional programming languages with a strict type system, e.g. Haskell, do typing errors form a larger subset of all programming errors?
Yes. You can usually write much the same program with and without static types. The difference will be that in the dynamic typing/untyped setting you'll get the errors at runtime, and in the statically typed setting you'll get (many of) the errors at compile time.

You can express a lot of invariants with types once you become familiar with their usage. For instance, you can make a 2-3 tree in Haskell where the types enforce the fact that the tree is balanced. You can implement the same code in Scheme or another dynamically typed language, but the result will lack that extra guarantee. Nothing says that your insert operation drops the node in the right place, etc., but at least you know you are safe from one class of bugs.

Types effectively prove lots of little boring theorems about your code as you program. Inconsistent code will often cause these sorts of little proofs to fail.

One thing that thinking about types helps you do is figure out whether, given the types, the choice of implementation is more-or-less unique. For instance, given the type for fmap and the extra law(s) you need to satisfy, you really only have one correct implementation option. ;)

So not only do you get the benefit of the compiler giving you errors rather than finding out later when your runtime system blows up in production, but the types help inform you as to what the shape of a correct implementation should be.

-Edward Kmett
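As a small illustration of the fmap point (a sketch, not from the original mail): parametricity plus the law fmap id == id pins the list instance down to a single implementation.

-- fmap :: Functor f => (a -> b) -> f a -> f b
-- For lists, the only law-abiding definition applies f to every element
-- while preserving order and length; dropping, duplicating, or reordering
-- elements would break fmap id == id.
mapList :: (a -> b) -> [a] -> [b]
mapList _ []     = []
mapList f (x:xs) = f x : mapList f xs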

Are you really sure about that... it might cause a typing error if you misspell something.

Proposal: "The double typing error"

Kind regards,
Samuel

On 4/05/2010, at 3:34 AM, Casey Hawthorne wrote:
I don't mean "tpynig errros".

On Mon, May 3, 2010 at 9:34 AM, Casey Hawthorne wrote:
Strict type system allows for a maximum number of programming errors to be caught at compile time.
I keep hearing this statement but others would argue that programming errors caught at compile time only form a minor subset of all errors caught.
So, in functional programming languages with a strict type system, e.g. Haskell, do typing errors form a larger subset of all programming errors?
Absolutely! Haskell developers trade debugging time for time arguing with the compiler about the correctness of their code.

I'll give this meaningless anecdotal statistic:

Compiler says my code is right => My code is actually right -- 60%
Compiler says my code is wrong => My code is actually wrong -- 95%

Haskell has a particular reputation for the former.

Luke

The problem with dynamic typing is that it has a much higher chance of
having a subtle error creep into your code that can go undetected for a long
period of time. A strong type system forces the code to fail early where
it's easier to track down and fix the problem, rather than trying to perform
debugging on the fly in a production system. This has an added advantage for
compiled languages in that for many non-trivial applications the time to
build and deploy a new instance of the program, even in the development
environment is often substantial, and the more trivial errors are discovered
at compile time, the less time is wasted on builds.
For small code bases the time spent tracking down an error at runtime might
be less than the time spent making your code conform to strict type
requirements, but for larger code bases the amount of time necessary to
track down such errors greatly outweighs the amount of time needed to make
your code well typed.
To look at the flip side of your statistics:
Compiler says my code is right => My code is actually wrong -- 40%
Compiler says my code is wrong => My code is actually right -- 5%
I'd argue this is provably wrong, as correct code by definition would
compile. The fact that it doesn't is proof enough that there's a problem
with it even if that problem is simply that the types you're using aren't
exactly correct. Further, I'd argue that in the first instance with a
non-strict type system, the instance of wrong code that compiles would be
higher. The only argument to support non-strict typing would be if you could
show that it takes less time to track down runtime bugs than it does to fix
compile time type errors, and any such claim I'd be highly skeptical of.
-R. Kyle Murphy
--
Curiosity was framed, Ignorance killed the cat.
On Mon, May 3, 2010 at 12:00, Luke Palmer wrote:
On Mon, May 3, 2010 at 9:34 AM, Casey Hawthorne wrote:
Strict type system allows for a maximum number of programming errors to be caught at compile time.
I keep hearing this statement but others would argue that programming errors caught at compile time only form a minor subset of all errors caught.
So, in functional programming languages with a strict type system, e.g. Haskell, do typing errors form a larger subset of all programming errors?
Absolutely! Haskell developers trade debugging time for time arguing with the compiler about the correctness of their code.
I'll give this meaningless anecdotal statistic:
Compiler says my code is right => My code is actually right -- 60%
Compiler says my code is wrong => My code is actually wrong -- 95%
Haskell has a particular reputation for the former.
Luke

In my opinion, code is 'right' when it conforms to the specification. Haskell's type system allows the programmer to express a part of the specification in the types, which then get checked by the compiler/type-checker. This is where I see the biggest benefit of a very expressive statically checked type system.
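For instance (a sketch of the idea, with a hypothetical NonEmpty type), part of a specification such as "this function must receive non-empty input" can be moved into the types, where the compiler checks it at every call site:

-- A list that is non-empty by construction; the specification
-- "head is always defined" becomes a compile-time guarantee.
data NonEmpty a = a :| [a]

safeHead :: NonEmpty a -> a
safeHead (x :| _) = x

-- Callers must supply a first element just to build the argument:
example :: Int
example = safeHead (1 :| [2, 3])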

On May 4, 2010, at 5:07 AM, Kyle Murphy wrote:
To look at the flip side of your statistics:
Compiler says my code is right => My code is actually wrong -- 40%
Compiler says my code is wrong => My code is actually right -- 5%
I'd argue this is provably wrong, as correct code by definition would compile.
Only for some definitions of "correct". It was clearly explained in the original paper introducing Hindley-Milner type checking that some *correct* programs (in the sense that they would not "go wrong" at run time and would deliver the intended results) would be rejected by such a type checker. To give a Haskell example,

main = print $ (head [1,'x'] + 1)

cannot possibly go wrong, yet Haskell will reject it.

But that's not the argument for dynamic types. The argument for dynamic types is adaptability. This is why Brad Cox designed Objective C the way he did, KNOWING that there was a price in efficiency and a price in static checking: to write programs that could be extended more easily. This is why Joe Armstrong designed Erlang the way he did, KNOWING that there was a price in efficiency and a price in static checking: so that systems could dynamically load new modules and even replace existing ones, so that systems could be upgraded without being shut down. (The Smalltalk term for this is "changing the engine while driving down the highway at 60mph", but sometimes shutdowns really are a very bad thing.)

It's interesting that after Brad Cox's original design, Objective C was extended in a way that permitted more type checking. It's interesting that Erlang, after several experiments, has finally got a type system that seems to have stuck (possibly because it serves three purposes: documentation in ErlDoc, performance in the native code compiler HiPE, and static checking in the Dialyzer), but it remains optional.

For about a year I kept a log of errors in the Smalltalk code I was writing, and type errors were in fact quite rare. Errors due to partial functions being applied to arguments outside their domain were more common.

When people have this argument, they are thinking in terms of type systems like (say) Java's. A type system that only stops you doing bad things is useful, but it's hard to love. The great thing about the Haskell type system is that it does far more than that. It's at least plausible that the Haskell type system reduces errors by reducing the amount of code you have to write.

To give just one example: the original QuickCheck for Haskell was admittedly a brilliant and innovative idea, but the actual *code* needed to make it happen wasn't that bulky, or even that hard to understand given the core idea: that random data generation could be driven by the types. In contrast, the QuickCheck that has been written for Erlang is a commercial product in which we are advised in strong terms that the "generators" for random data are NOT types, and writing new ones is, while not *too* complicated, something the programmer always has to do instead of sometimes letting it be automatic.

Various kinds of generic programming for Haskell take this idea of letting the types drive automatic programming still further. ML types => theorems for free; Haskell types => code for free!
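To make the "generation driven by the types" point concrete, here is what a Haskell QuickCheck property looks like (a sketch, not from the original mail; the Arbitrary instance for the argument type supplies the random data):

import Test.QuickCheck

-- QuickCheck derives a generator from the argument type [Int] alone.
prop_reverseReverse :: [Int] -> Bool
prop_reverseReverse xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseReverse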

On Mon, May 3, 2010 at 11:07 AM, Kyle Murphy wrote:
The problem with dynamic typing is that it has a much higher chance of having a subtle error creep into your code that can go undetected for a long period of time. A strong type system forces the code to fail early where it's easier to track down and fix the problem, rather than trying to perform debugging on the fly in a production system. This has an added advantage for compiled languages in that for many non-trivial applications the time to build and deploy a new instance of the program, even in the development environment is often substantial, and the more trivial errors are discovered at compile time, the less time is wasted on builds.
For small code bases the time spent tracking down an error at runtime might be less than the time spent making your code conform to strict type requirements, but for larger code bases the amount of time necessary to track down such errors greatly outweighs the amount of time needed to make your code well typed.
To look at the flip side of your statistics:
Compiler says my code is right => My code is actually wrong -- 40%
Compiler says my code is wrong => My code is actually right -- 5%
I'd argue this is provably wrong, as correct code by definition would compile.
Here is a contrived example of what I am referring to:

prefac f 0 = 1
prefac f n = n * f (n-1)

fac = (\x -> x x) (\x -> prefac (x x))

If this code were allowed to compile (say, by inserting unsafeCoerce anywhere you like), it would correctly implement a factorial function. It is precisely these cases that lie behind dynamically typed languages' advocacy: my code is right, but I can't (or it is too much work to) convince the compiler of that fact. It is a pretty bold statement to say that these do not occur.
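For comparison, the same function can be written in well-typed Haskell by letting fix tie the recursive knot (a sketch; fix lives in Data.Function):

import Data.Function (fix)  -- fix f = let x = f x in x

prefac :: (Integer -> Integer) -> Integer -> Integer
prefac f 0 = 1
prefac f n = n * f (n - 1)

-- No self-application needed: the recursion inside fix does the knot-tying.
fac :: Integer -> Integer
fac = fix prefac
-- fac 5 == 120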
The fact that it doesn't is proof enough that there's a problem with it even if that problem is simply that the types you're using aren't exactly correct. Further, I'd argue that in the first instance with a non-strict type system, the instance of wrong code that compiles would be higher. The only argument to support non-strict typing would be if you could show that it takes less time to track down runtime bugs than it does to fix compile time type errors, and any such claim I'd be highly skeptical of.
Clearly. But many people believe in this methodology, and use test suites and code coverage instead of types. Indeed, such practices are essentially "empirical type checking", and they afford the advantage that their verification is much more expressive (however less reliable) than our static type system, because they may use arbitrary code to express their predicates.

What I seem to be getting at is this plane of type systems:

Constrained ------------------------- Expressive
Unreliable |                (C)
           |                         (test suites)
           |  (C++)                .
           |                     .
           |  (Java/C#)  (Scala) .
           |                   .
           |                 .
           |  (Haskell)    .
           |
           |  (Agda)
Reliable

Where by Constrained/Expressive I mean the ability for the system to express properties *about the code* (so C++'s type system being Turing complete is irrelevant). By Unreliable/Reliable I mean, given popular engineering practice in that language, the chance that if it passes the checks then it works as intended. For all the languages, I mean their compilers. Test suites extend down the right-hand side, depending on how rigorous you are about testing, but they never get as far down as Agda. :-)

Luke

lrpalmer:
What I seem to be getting at is this plane of type systems:
Constrained ------------------------- Expressive
Unreliable |                (C)
           |                         (test suites)
           |  (C++)                .
           |                     .
           |  (Java/C#)  (Scala) .
           |                   .
           |                 .
           |  (Haskell)    .
           |
           |  (Agda)
Reliable
Where have I seen this before.... oh!!

http://i.imgur.com/srLvr.jpg

The Big Lebowski Alignment Chart, mirrored!!

+------------------------------------------------
| Lawful Good                 Chaotic Good
|   (Agda)                      (C++)
|
|
|            True Neutral
|             (Haskell)
|
|
|
| Lawful Evil                 Chaotic Evil
|  (testsuites)                 (C)
+------------------------------------------------

On 4 May 2010 13:30, Luke Palmer wrote:
Here is a contrived example of what I am referring to:
prefac f 0 = 1
prefac f n = n * f (n-1)
fac = (\x -> x x) (\x -> prefac (x x))
I can't work out how this works (or should work rather); is it meant to be using church numerals or something (assuming that they have been made an instance of Num so that - and * work)?

--
Ivan Lazar Miljenovic
Ivan.Miljenovic@gmail.com
IvanMiljenovic.wordpress.com

prefac is just a normal factorial function with recursion factored out. fix prefac 5 gives 120, for example.
On Tue, May 4, 2010 at 12:13 AM, Ivan Miljenovic wrote:
On 4 May 2010 13:30, Luke Palmer wrote:
Here is a contrived example of what I am referring to:
prefac f 0 = 1
prefac f n = n * f (n-1)
fac = (\x -> x x) (\x -> prefac (x x))
I can't work out how this works (or should work rather); is it meant to be using church numerals or something (assuming that they have been made an instance of Num so that - and * work)?
--
Ivan Lazar Miljenovic
Ivan.Miljenovic@gmail.com
IvanMiljenovic.wordpress.com

On Tue, May 4, 2010 at 12:13 AM, Ivan Miljenovic wrote:
On 4 May 2010 13:30, Luke Palmer wrote:
Here is a contrived example of what I am referring to:
prefac f 0 = 1
prefac f n = n * f (n-1)
fac = (\x -> x x) (\x -> prefac (x x))
I can't work out how this works (or should work rather); is it meant to be using church numerals or something (assuming that they have been made an instance of Num so that - and * work)?
Looks like a variation on H. Curry's fixed-point combinator to me, e.g.:

y f = (\x -> f (x x)) (\x -> f (x x))

Which of course is perfectly useful, but not valid in any type system that excludes absurdities. The otherwise-infinite type signature for Y is circumvented by built-in recursion in Haskell (making halting undecidable in the process, unfortunately).

- C.
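That said, the self-application can be given a type if the otherwise-infinite type is named explicitly with a recursive newtype (a sketch for comparison, not from the original mail):

newtype Rec a = In { out :: Rec a -> a }

-- Curry's Y with the self-application mediated by In/out;
-- out x x plays the role of x x in the untyped version.
y :: (a -> a) -> a
y f = (\x -> f (out x x)) (In (\x -> f (out x x)))

-- y prefac 5 == 120, matching fix prefac.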

On Mon, May 3, 2010 at 10:13 PM, Ivan Miljenovic wrote:
On 4 May 2010 13:30, Luke Palmer wrote:
Here is a contrived example of what I am referring to:
prefac f 0 = 1
prefac f n = n * f (n-1)
fac = (\x -> x x) (\x -> prefac (x x))
I can't work out how this works (or should work rather); is it meant to be using church numerals or something (assuming that they have been made an instance of Num so that - and * work)?
No, they're just integers. fac is a beta expansion of fix prefac. Observe the magic:

(\x -> x x) (\x -> prefac (x x)) 2
(\x -> prefac (x x)) (\x -> prefac (x x)) 2
prefac ((\x -> prefac (x x)) (\x -> prefac (x x))) 2
2 * ((\x -> prefac (x x)) (\x -> prefac (x x))) (2-1)
2 * prefac ((\x -> prefac (x x)) (\x -> prefac (x x))) (2-1)
2 * prefac ((\x -> prefac (x x)) (\x -> prefac (x x))) 1
2 * (1 * ((\x -> prefac (x x)) (\x -> prefac (x x))) (1-1))
2 * (1 * prefac ((\x -> prefac (x x)) (\x -> prefac (x x))) (1-1))
2 * (1 * prefac ((\x -> prefac (x x)) (\x -> prefac (x x))) 0)
2 * (1 * 1)
2

Luke

On 04/05/2010, at 13:30, Luke Palmer wrote:
On Mon, May 3, 2010 at 11:07 AM, Kyle Murphy wrote:
The fact that it doesn't is proof enough that there's a problem with it even if that problem is simply that the types you're using aren't exactly correct. Further, I'd argue that in the first instance with a non-strict type system, the instance of wrong code that compiles would be higher. The only argument to support non-strict typing would be if you could show that it takes less time to track down runtime bugs than it does to fix compile time type errors, and any such claim I'd be highly skeptical of.
Clearly. But many people believe in this methodology, and use test suites and code coverage instead of types. Indeed, such practices are essentially "empirical type checking", and they afford the advantage that their verification is much more expressive (however less reliable) than our static type system, because they may use arbitrary code to express their predicates.
I don't think it's a question of types vs. testing. Rather, it's types + testing vs. just testing. How is the latter more expressive than the former for defining properties of programs?

Also, testing loses a lot of appeal once you start dealing with concurrent programs. Testing for "this program doesn't have race conditions" isn't exactly easy. You want as many static guarantees as possible.

Roman

Dear Kyle,

Improving the example to use more idiomatic Haskell is a good idea. I'm happy for you to propose another approach entirely; I'm simply not skilled enough in Haskell to do anything better. However, thanks for writing some comments for the code below, it will certainly help.

Kind regards,
Samuel

On 4/05/2010, at 3:17 AM, Kyle Murphy wrote:
Reasons to learn Haskell include:

- Lazy evaluation can make some kinds of algorithms possible to implement that aren't possible to implement in other languages (without modification to the algorithm).
- Strict type system allows for a maximum number of programming errors to be caught at compile time.
- Functional design makes designing for concurrency and parallelism simpler than in procedural or OO designs.
- Clear differentiation of impure code helps to isolate points of failure and to make the programmer more aware of possible side effects.
- Knowledge of higher-order functions can provide the programmer with unique ways of solving certain problems, even in non-functional languages.
- Excellent Foreign Function Interface makes integration with existing libraries relatively painless (compared with many other languages).
There are probably other reasons I'm not even aware of as I'm still a beginner myself, but to me those are some of the most important.
The program you provided seems like a poor example to demonstrate the language, as it uses some very hard-to-follow logic for someone not familiar with the language. Furthermore, the results cannot be easily explained nor reasoned about simply (mostly due to the repeated application of id and toggle to the same list as foldl runs). Nevertheless I've attempted to provide what comments I can, although I'm sure someone can do better than me and I might have made a mistake somewhere.
--- Begin Code ---
{- This declares a new type, Door; two new constructors for the Door type, Open and Closed; and tells the compiler to make Door an instance of the Show class, which provides the function show, used to convert something into a String, i.e. show :: Door -> String -}
data Door = Open | Closed deriving Show

toggle :: Door -> Door
toggle Open = Closed -- New function to convert an Open Door to a Closed Door.
toggle Closed = Open

{- I broke this line down for easier understanding. This line takes two lists and combines them using the ($) operator. The first list is provided by converter; the second list is implicit, as can be seen from the function signature provided below, and consists of a list of Doors. In other words, it takes each function in the converter list and applies it to the corresponding Door from the last argument to produce a new list of Doors. The list this produces can be thought of as looking something like the following when called with 3, for example:
[(id $ Door), (id $ Door), (id $ Door), (toggle $ Door), (id $ Door), (id $ Door)...] -}
pass :: Int -> [Door] -> [Door]
pass k = zipWith ($) converter
  where converter :: [Door -> Door]
        {- This produces a list of functions from Door to Door: it produces k id functions, one toggle function, and then repeats. id just returns whatever it's given. -}
        converter = cycle $ replicate k id ++ [toggle]

{- This creates two lists, one n long of Closed instances, and one from 0 to n. flip pass reverses the order of arguments to pass so that instead of taking a number and a list of Doors it instead takes a list of Doors and a number. foldl takes one number from the list, and the list of Closed Door instances if this is the first time through (or the result of the last pass), and passes them both to pass. -}
run :: Int -> [Door]
run n = foldl (flip pass) (replicate n Closed) [0..n]

main = print $ run 100
--- End Code ---
-R. Kyle Murphy
--
Curiosity was framed, Ignorance killed the cat.
On Mon, May 3, 2010 at 06:43, Rafael Gustavo da Cunha Pereira Pinto wrote:
If you are running from GHCi, just type run 100 at the prompt. If you intend to compile it, you have to add
main = print $ run 100
The compiler adds a call to main :: IO (), which is intended to be the main entry point of your code.
We need to add print, as run has type
run :: Int -> [Door]
so run 100 has type [Door].
print is
print :: (Show a) => a -> IO ()
The IO () stands for an I/O action that returns no useful value; this is the black magic of Haskell, intended to separate pure code from I/O side effects...
On Mon, May 3, 2010 at 06:31, Samuel Williams wrote:
Also, one more thing - if someone could write some comments to go along with the source code that explain what it is doing, that would be really helpful. I can see the general structure, but I don't know the ins and outs of Haskell. If someone could augment the example with comments explaining what the functions do, that would be great!

data Door = Open | Closed deriving Show
toggle Open = Closed
toggle Closed = Open
pass k = zipWith ($) (cycle $ replicate k id ++ [toggle])
run n = foldl (flip pass) (replicate n Closed) [0..n]
Do I need to add run 100 to the end of the example for it to actually do something?
-- Rafael Gustavo da Cunha Pereira Pinto

Dear Kyle,

I've received the following program. You did a fantastic job of explaining the other one, but as you said it wasn't a great approach; if you have a moment could you explain this one?

doorOpen :: Int -> Bool
doorOpen door = doh door door

doh :: Int -> Int -> Bool
doh door 0 = True
doh door pass = if (door `rem` (pass+1)) == pass
                  then not (doh door (pass-1))
                  else doh door (pass-1)

doors :: [Bool]
doors = [doorOpen n | n <- [0..]]

printDoor :: (Int,Bool) -> IO ()
printDoor (door,open) = putStrLn ("Door #" ++ (show door) ++ " is " ++ if open then "open." else "closed.")

printUpTo :: Int -> IO ()
printUpTo n = mapM_ printDoor (zip [0..(n-1)] doors)

printUpTo 100

Kind regards,
Samuel

Alright, here's my attempt to add comments, although once again it seems like the choice of algorithm in this example is a little weird. By checking the output I was able to determine that it results in True for values of n*n-1, but why exactly that is I can't really figure out at the moment. If I actually sat down with a sheet of paper and walked through the recursion I might be able to make sense out of it. (See the note after the quoted message below for why n*n-1 shows up.)

--- Begin Code ---

{- This function takes an Int, and passes it to doh twice, returning the result. -}
doorOpen :: Int -> Bool
doorOpen door = doh door door

{- This function takes two Ints, returning True when the second Int is 0. If the second Int isn't 0, it checks whether the first Int modulo the second Int plus one is equal to the second Int. This condition will only be true when the first number has the following relationship to the second one: n*i+i. E.g. given 10 as the second number, this would be true for 10, 21, 32, 43, etc. If the condition is true it calls itself recursively while decrementing the second Int by one, and inverts the return value. If the condition is false it calls itself recursively while decrementing the second Int by one, and returns the result unmodified. -}
doh :: Int -> Int -> Bool
doh door 0 = True
doh door pass = if (door `rem` (pass+1)) == pass
                  then not (doh door (pass-1))
                  else doh door (pass-1)

{- This produces an infinite list created by calling doorOpen with the numbers 0 to infinity. -}
doors :: [Bool]
doors = [doorOpen n | n <- [0..]]

{- Utility function to print a tuple with some explanation text. Note that this is inside the IO monad and therefore impure. -}
printDoor :: (Int,Bool) -> IO ()
printDoor (door,open) = putStrLn ("Door #" ++ (show door) ++ " is " ++ if open then "open." else "closed.")

{- Given an Int this prints the first n elements from the doors list. This works because zip only produces a list as long as the shorter of its two arguments. mapM_ is a variant of map that works on monads and discards its result. Ordinarily that would be pointless and might as well be a no-op, but because printDoor executes inside the IO monad it can have side effects, and therefore must be evaluated every time. -}
printUpTo :: Int -> IO ()
printUpTo n = mapM_ printDoor (zip [0..(n-1)] doors)

{- The main entry point to the program; calls printUpTo with 100. -}
main :: IO ()
main = printUpTo 100

--- End Code ---

-R. Kyle Murphy
--
Curiosity was framed, Ignorance killed the cat.

On Mon, May 3, 2010 at 13:15, Samuel Williams <space.ship.traveller@gmail.com> wrote:
Dear Kyle,
I've received the following program. You did a fantastic job of explaining the other one, but as you said it wasn't a great approach; if you have a moment could you explain this one?
doorOpen :: Int -> Bool
doorOpen door = doh door door

doh :: Int -> Int -> Bool
doh door 0 = True
doh door pass = if (door `rem` (pass+1)) == pass
                  then not (doh door (pass-1))
                  else doh door (pass-1)

doors :: [Bool]
doors = [doorOpen n | n <- [0..]]

printDoor :: (Int,Bool) -> IO ()
printDoor (door,open) = putStrLn ("Door #" ++ (show door) ++ " is " ++ if open then "open." else "closed.")

printUpTo :: Int -> IO ()
printUpTo n = mapM_ printDoor (zip [0..(n-1)] doors)
printUpTo 100
Kind regards, Samuel
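As for why n*n-1 shows up: with 0-indexed doors, doh flips door d once for every divisor of d+1 other than 1 (door `rem` (pass+1) == pass means pass+1 divides door+1), so door d ends open exactly when d+1 has an odd number of divisors, i.e. when d+1 is a perfect square. A direct cross-check (a sketch assuming the definitions above; isOpenDirect is a hypothetical helper, not part of the thread):

-- Door d (0-indexed) should be open iff d+1 is a perfect square,
-- i.e. iff d+1 has an odd number of divisors.
isOpenDirect :: Int -> Bool
isOpenDirect d = odd (length [k | k <- [1 .. d + 1], (d + 1) `rem` k == 0])

-- True if doorOpen and the divisor-count rule agree on the first 100 doors.
checkDoorOpen :: Bool
checkDoorOpen = and [doorOpen d == isOpenDirect d | d <- [0 .. 99]]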

Here is my attempt. I tried to avoid higher concepts like folds and things like the ($) operator. Most recursions are written explicitly.

{---- BEGIN CODE ----}

module Main where

-- Data type representing a door which is either Open or Closed.
data Door = Open | Closed deriving Show

toggle :: Door -> Door
toggle Open   = Closed
toggle Closed = Open

-- Applies the function f to every n'th element of a list.
skipMap :: (a -> a) -> Int -> [a] -> [a]
skipMap f n
  | n < 1     = error "skipMap: step < 1"
  | otherwise = go (n - 1)
  where
    -- Apply the function 'f' to an element of the list when the
    -- counter reaches 0, otherwise leave the element untouched.
    go _ []     = []
    go 0 (x:xs) = f x : go (n - 1) xs
    go c (x:xs) = x   : go (c - 1) xs

-- Calculate the final answer.
run :: Int -> [Door]
run n = go 1 initialDoors -- Start by toggling every door.
  where
    -- Initial list of closed doors.
    initialDoors :: [Door]
    initialDoors = replicate n Closed

    -- Toggle every c doors, then proceed by toggling every c+1 doors
    -- of the result, etcetera... Stops after toggling the n'th door.
    go :: Int -> [Door] -> [Door]
    go c doors | c > n     = doors
               | otherwise = go (c + 1) (skipMap toggle c doors)

-- Print information about a single door.
printDoor :: (Int, Door) -> IO ()
printDoor (n, door) = putStrLn ("Door #" ++ show n ++ " is " ++ show door)

printRun :: Int -> IO ()
printRun n = mapM_ printDoor (zip [1..n] (run n))

-- The program entry point.
main :: IO ()
main = printRun 100

{---- END CODE ----}
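A quick sanity check of skipMap on its own (an added example, assuming the module above is loaded):

-- negate applied to every 3rd element:
demoSkipMap :: [Int]
demoSkipMap = skipMap negate 3 [1 .. 9]
-- demoSkipMap == [1,2,-3,4,5,-6,7,8,-9]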

Thanks Roel and Kyle for your contributions!

On 4/05/2010, at 10:35 PM, Roel van Dijk wrote:
Here is my attempt. I tried to avoid higher concepts like folds and things like the ($) operator. Most recursions are written explicitly.
{---- BEGIN CODE ----}
module Main where
-- Data type representing a door which is either Open or Closed.
data Door = Open | Closed deriving Show

toggle :: Door -> Door
toggle Open   = Closed
toggle Closed = Open

-- Applies the function f to every n'th element of a list.
skipMap :: (a -> a) -> Int -> [a] -> [a]
skipMap f n
  | n < 1     = error "skipMap: step < 1"
  | otherwise = go (n - 1)
  where
    -- Apply the function 'f' to an element of the list when the
    -- counter reaches 0, otherwise leave the element untouched.
    go _ []     = []
    go 0 (x:xs) = f x : go (n - 1) xs
    go c (x:xs) = x   : go (c - 1) xs

-- Calculate the final answer.
run :: Int -> [Door]
run n = go 1 initialDoors -- Start by toggling every door.
  where
    -- Initial list of closed doors.
    initialDoors :: [Door]
    initialDoors = replicate n Closed

    -- Toggle every c doors, then proceed by toggling every c+1 doors
    -- of the result, etcetera... Stops after toggling the n'th door.
    go :: Int -> [Door] -> [Door]
    go c doors | c > n     = doors
               | otherwise = go (c + 1) (skipMap toggle c doors)

-- Print information about a single door.
printDoor :: (Int, Door) -> IO ()
printDoor (n, door) = putStrLn ("Door #" ++ show n ++ " is " ++ show door)

printRun :: Int -> IO ()
printRun n = mapM_ printDoor (zip [1..n] (run n))

-- The program entry point.
main :: IO ()
main = printRun 100
{---- END CODE ----}
participants (13)

- Casey Hawthorne
- Casey McCann
- Daniel Peebles
- Don Stewart
- Edward Kmett
- Ivan Miljenovic
- Kyle Murphy
- Luke Palmer
- Rafael Gustavo da Cunha Pereira Pinto
- Richard O'Keefe
- Roel van Dijk
- Roman Leshchinskiy
- Samuel Williams